00:00:00.001 Started by upstream project "autotest-per-patch" build number 132773 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.071 Fetching changes from the remote Git repository 00:00:00.074 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.109 Using shallow fetch with depth 1 00:00:00.109 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.109 > git --version # timeout=10 00:00:00.151 > git --version # 'git version 2.39.2' 00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.617 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.633 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.654 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.654 > git config core.sparsecheckout # timeout=10 00:00:02.676 > git read-tree -mu HEAD # timeout=10 00:00:02.698 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.734 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.734 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.856 [Pipeline] Start of Pipeline 00:00:02.870 [Pipeline] library 00:00:02.872 Loading library shm_lib@master 00:00:02.872 Library shm_lib@master is cached. Copying from home. 00:00:02.890 [Pipeline] node 01:01:49.375 Still waiting to schedule task 01:01:49.375 Waiting for next available executor on ‘vagrant-vm-host’ 01:12:57.984 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 01:12:57.987 [Pipeline] { 01:12:57.999 [Pipeline] catchError 01:12:58.001 [Pipeline] { 01:12:58.016 [Pipeline] wrap 01:12:58.027 [Pipeline] { 01:12:58.037 [Pipeline] stage 01:12:58.039 [Pipeline] { (Prologue) 01:12:58.063 [Pipeline] echo 01:12:58.065 Node: VM-host-SM38 01:12:58.072 [Pipeline] cleanWs 01:12:58.084 [WS-CLEANUP] Deleting project workspace... 01:12:58.084 [WS-CLEANUP] Deferred wipeout is used... 
01:12:58.093 [WS-CLEANUP] done 01:12:58.352 [Pipeline] setCustomBuildProperty 01:12:58.411 [Pipeline] httpRequest 01:12:58.817 [Pipeline] echo 01:12:58.819 Sorcerer 10.211.164.101 is alive 01:12:58.830 [Pipeline] retry 01:12:58.832 [Pipeline] { 01:12:58.847 [Pipeline] httpRequest 01:12:58.853 HttpMethod: GET 01:12:58.854 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:12:58.854 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:12:58.855 Response Code: HTTP/1.1 200 OK 01:12:58.856 Success: Status code 200 is in the accepted range: 200,404 01:12:58.857 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:12:59.001 [Pipeline] } 01:12:59.019 [Pipeline] // retry 01:12:59.027 [Pipeline] sh 01:12:59.314 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:12:59.332 [Pipeline] httpRequest 01:12:59.733 [Pipeline] echo 01:12:59.735 Sorcerer 10.211.164.101 is alive 01:12:59.746 [Pipeline] retry 01:12:59.749 [Pipeline] { 01:12:59.768 [Pipeline] httpRequest 01:12:59.775 HttpMethod: GET 01:12:59.775 URL: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:12:59.776 Sending request to url: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:12:59.778 Response Code: HTTP/1.1 200 OK 01:12:59.778 Success: Status code 200 is in the accepted range: 200,404 01:12:59.779 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:13:02.045 [Pipeline] } 01:13:02.069 [Pipeline] // retry 01:13:02.078 [Pipeline] sh 01:13:02.402 + tar --no-same-owner -xf spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 01:13:05.710 [Pipeline] sh 01:13:05.995 + git -C spdk log --oneline -n5 01:13:05.995 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:13:05.995 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 01:13:05.995 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:13:05.995 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:13:05.995 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 01:13:06.017 [Pipeline] writeFile 01:13:06.035 [Pipeline] sh 01:13:06.325 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 01:13:06.339 [Pipeline] sh 01:13:06.623 + cat autorun-spdk.conf 01:13:06.623 SPDK_RUN_FUNCTIONAL_TEST=1 01:13:06.623 SPDK_TEST_NVMF=1 01:13:06.623 SPDK_TEST_NVMF_TRANSPORT=tcp 01:13:06.623 SPDK_TEST_USDT=1 01:13:06.623 SPDK_TEST_NVMF_MDNS=1 01:13:06.623 SPDK_RUN_UBSAN=1 01:13:06.623 NET_TYPE=virt 01:13:06.623 SPDK_JSONRPC_GO_CLIENT=1 01:13:06.623 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:13:06.631 RUN_NIGHTLY=0 01:13:06.633 [Pipeline] } 01:13:06.647 [Pipeline] // stage 01:13:06.664 [Pipeline] stage 01:13:06.666 [Pipeline] { (Run VM) 01:13:06.680 [Pipeline] sh 01:13:06.976 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 01:13:06.976 + echo 'Start stage prepare_nvme.sh' 01:13:06.976 Start stage prepare_nvme.sh 01:13:06.976 + [[ -n 4 ]] 01:13:06.976 + disk_prefix=ex4 01:13:06.976 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 01:13:06.976 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 01:13:06.976 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 01:13:06.976 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 01:13:06.976 ++ SPDK_TEST_NVMF=1 01:13:06.976 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 01:13:06.976 ++ SPDK_TEST_USDT=1 01:13:06.976 ++ SPDK_TEST_NVMF_MDNS=1 01:13:06.976 ++ SPDK_RUN_UBSAN=1 01:13:06.976 ++ NET_TYPE=virt 01:13:06.976 ++ SPDK_JSONRPC_GO_CLIENT=1 01:13:06.976 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:13:06.976 ++ RUN_NIGHTLY=0 01:13:06.976 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 01:13:06.976 + nvme_files=() 01:13:06.976 + declare -A nvme_files 01:13:06.976 + backend_dir=/var/lib/libvirt/images/backends 01:13:06.976 + nvme_files['nvme.img']=5G 01:13:06.976 + nvme_files['nvme-cmb.img']=5G 01:13:06.976 + nvme_files['nvme-multi0.img']=4G 01:13:06.976 + nvme_files['nvme-multi1.img']=4G 01:13:06.976 + nvme_files['nvme-multi2.img']=4G 01:13:06.976 + nvme_files['nvme-openstack.img']=8G 01:13:06.976 + nvme_files['nvme-zns.img']=5G 01:13:06.976 + (( SPDK_TEST_NVME_PMR == 1 )) 01:13:06.976 + (( SPDK_TEST_FTL == 1 )) 01:13:06.976 + (( SPDK_TEST_NVME_FDP == 1 )) 01:13:06.976 + [[ ! -d /var/lib/libvirt/images/backends ]] 01:13:06.976 + for nvme in "${!nvme_files[@]}" 01:13:06.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 01:13:06.976 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 01:13:06.976 + for nvme in "${!nvme_files[@]}" 01:13:06.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 01:13:06.976 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 01:13:06.976 + for nvme in "${!nvme_files[@]}" 01:13:06.976 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 01:13:06.976 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 01:13:06.976 + for nvme in "${!nvme_files[@]}" 01:13:06.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 01:13:06.977 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 01:13:06.977 + for nvme in "${!nvme_files[@]}" 01:13:06.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 01:13:06.977 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 01:13:06.977 + for nvme in "${!nvme_files[@]}" 01:13:06.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 01:13:06.977 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 01:13:06.977 + for nvme in "${!nvme_files[@]}" 01:13:06.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 01:13:06.977 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 01:13:06.977 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 01:13:06.977 + echo 'End stage prepare_nvme.sh' 01:13:06.977 End stage prepare_nvme.sh 01:13:06.991 [Pipeline] sh 01:13:07.275 + DISTRO=fedora39 01:13:07.276 + CPUS=10 01:13:07.276 + RAM=12288 01:13:07.276 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 01:13:07.276 Setup: -n 10 -s 12288 -x -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 01:13:07.276 01:13:07.276 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 01:13:07.276 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 01:13:07.276 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 01:13:07.276 HELP=0 01:13:07.276 DRY_RUN=0 01:13:07.276 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 01:13:07.276 NVME_DISKS_TYPE=nvme,nvme, 01:13:07.276 NVME_AUTO_CREATE=0 01:13:07.276 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 01:13:07.276 NVME_CMB=,, 01:13:07.276 NVME_PMR=,, 01:13:07.276 NVME_ZNS=,, 01:13:07.276 NVME_MS=,, 01:13:07.276 NVME_FDP=,, 01:13:07.276 SPDK_VAGRANT_DISTRO=fedora39 01:13:07.276 SPDK_VAGRANT_VMCPU=10 01:13:07.276 SPDK_VAGRANT_VMRAM=12288 01:13:07.276 SPDK_VAGRANT_PROVIDER=libvirt 01:13:07.276 SPDK_VAGRANT_HTTP_PROXY= 01:13:07.276 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 01:13:07.276 SPDK_OPENSTACK_NETWORK=0 01:13:07.276 VAGRANT_PACKAGE_BOX=0 01:13:07.276 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 01:13:07.276 FORCE_DISTRO=true 01:13:07.276 VAGRANT_BOX_VERSION= 01:13:07.276 EXTRA_VAGRANTFILES= 01:13:07.276 NIC_MODEL=e1000 01:13:07.276 01:13:07.276 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 01:13:07.276 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 01:13:09.868 Bringing machine 'default' up with 'libvirt' provider... 01:13:09.868 ==> default: Creating image (snapshot of base box volume). 01:13:10.130 ==> default: Creating domain with the following settings... 
01:13:10.130 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733720881_8fb4cfa07ab279e5ee5a 01:13:10.130 ==> default: -- Domain type: kvm 01:13:10.130 ==> default: -- Cpus: 10 01:13:10.130 ==> default: -- Feature: acpi 01:13:10.130 ==> default: -- Feature: apic 01:13:10.130 ==> default: -- Feature: pae 01:13:10.130 ==> default: -- Memory: 12288M 01:13:10.130 ==> default: -- Memory Backing: hugepages: 01:13:10.130 ==> default: -- Management MAC: 01:13:10.130 ==> default: -- Loader: 01:13:10.130 ==> default: -- Nvram: 01:13:10.130 ==> default: -- Base box: spdk/fedora39 01:13:10.130 ==> default: -- Storage pool: default 01:13:10.130 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733720881_8fb4cfa07ab279e5ee5a.img (20G) 01:13:10.130 ==> default: -- Volume Cache: default 01:13:10.130 ==> default: -- Kernel: 01:13:10.130 ==> default: -- Initrd: 01:13:10.130 ==> default: -- Graphics Type: vnc 01:13:10.130 ==> default: -- Graphics Port: -1 01:13:10.130 ==> default: -- Graphics IP: 127.0.0.1 01:13:10.130 ==> default: -- Graphics Password: Not defined 01:13:10.130 ==> default: -- Video Type: cirrus 01:13:10.130 ==> default: -- Video VRAM: 9216 01:13:10.130 ==> default: -- Sound Type: 01:13:10.130 ==> default: -- Keymap: en-us 01:13:10.130 ==> default: -- TPM Path: 01:13:10.130 ==> default: -- INPUT: type=mouse, bus=ps2 01:13:10.130 ==> default: -- Command line args: 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 01:13:10.130 ==> default: -> value=-drive, 01:13:10.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 01:13:10.130 ==> default: -> value=-drive, 01:13:10.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:13:10.130 ==> default: -> value=-drive, 01:13:10.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:13:10.130 ==> default: -> value=-drive, 01:13:10.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 01:13:10.130 ==> default: -> value=-device, 01:13:10.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:13:10.130 ==> default: Creating shared folders metadata... 01:13:10.130 ==> default: Starting domain. 01:13:11.072 ==> default: Waiting for domain to get an IP address... 01:13:25.975 ==> default: Waiting for SSH to become available... 01:13:25.975 ==> default: Configuring and enabling network interfaces... 
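[Editor's note] The "-device"/"-drive" value pairs listed above are the extra QEMU arguments vagrant-libvirt attaches to the domain: one NVMe controller (serial 12340) backed by ex4-nvme.img with a single namespace, and a second controller (serial 12341) whose three namespaces map to ex4-nvme-multi0/1/2.img. A minimal hand-run sketch of the second controller is shown below; only the NVMe arguments are taken from this log, while the machine, memory, and boot-disk options are illustrative placeholders, not what the job actually runs:

    # Hypothetical standalone invocation for experimenting with a multi-namespace
    # NVMe controller; boot image and memory size are assumptions, the -device/-drive
    # values are copied from the log above.
    qemu-system-x86_64 -enable-kvm -m 4096 \
      -drive file=fedora39.qcow2,if=virtio \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096

Inside the guest this surfaces as a single nvme controller with multiple namespaces (nvme0n1, nvme0n2, ...), which is what the later "setup.sh status" output in this log reports.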
01:13:29.273 default: SSH address: 192.168.121.127:22 01:13:29.273 default: SSH username: vagrant 01:13:29.273 default: SSH auth method: private key 01:13:30.675 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 01:13:37.241 ==> default: Mounting SSHFS shared folder... 01:13:37.812 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 01:13:37.812 ==> default: Checking Mount.. 01:13:39.214 ==> default: Folder Successfully Mounted! 01:13:39.214 01:13:39.214 SUCCESS! 01:13:39.214 01:13:39.214 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 01:13:39.214 Use vagrant "suspend" and vagrant "resume" to stop and start. 01:13:39.214 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 01:13:39.214 01:13:39.224 [Pipeline] } 01:13:39.240 [Pipeline] // stage 01:13:39.251 [Pipeline] dir 01:13:39.252 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 01:13:39.254 [Pipeline] { 01:13:39.266 [Pipeline] catchError 01:13:39.268 [Pipeline] { 01:13:39.281 [Pipeline] sh 01:13:39.567 + vagrant ssh-config --host vagrant 01:13:39.567 + sed -ne '/^Host/,$p' 01:13:39.567 + tee ssh_conf 01:13:42.139 Host vagrant 01:13:42.139 HostName 192.168.121.127 01:13:42.139 User vagrant 01:13:42.139 Port 22 01:13:42.139 UserKnownHostsFile /dev/null 01:13:42.139 StrictHostKeyChecking no 01:13:42.139 PasswordAuthentication no 01:13:42.139 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 01:13:42.139 IdentitiesOnly yes 01:13:42.139 LogLevel FATAL 01:13:42.139 ForwardAgent yes 01:13:42.139 ForwardX11 yes 01:13:42.139 01:13:42.155 [Pipeline] withEnv 01:13:42.157 [Pipeline] { 01:13:42.168 [Pipeline] sh 01:13:42.443 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 01:13:42.443 source /etc/os-release 01:13:42.443 [[ -e /image.version ]] && img=$(< /image.version) 01:13:42.443 # Minimal, systemd-like check. 01:13:42.443 if [[ -e /.dockerenv ]]; then 01:13:42.443 # Clear garbage from the node'\''s name: 01:13:42.443 # agt-er_autotest_547-896 -> autotest_547-896 01:13:42.443 # $HOSTNAME is the actual container id 01:13:42.443 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 01:13:42.443 if grep -q "/etc/hostname" /proc/self/mountinfo; then 01:13:42.443 # We can assume this is a mount from a host where container is running, 01:13:42.443 # so fetch its hostname to easily identify the target swarm worker. 
01:13:42.443 container="$(< /etc/hostname) ($agent)" 01:13:42.443 else 01:13:42.443 # Fallback 01:13:42.443 container=$agent 01:13:42.443 fi 01:13:42.443 fi 01:13:42.443 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 01:13:42.443 ' 01:13:42.451 [Pipeline] } 01:13:42.464 [Pipeline] // withEnv 01:13:42.472 [Pipeline] setCustomBuildProperty 01:13:42.487 [Pipeline] stage 01:13:42.490 [Pipeline] { (Tests) 01:13:42.506 [Pipeline] sh 01:13:42.782 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 01:13:42.794 [Pipeline] sh 01:13:43.086 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 01:13:43.099 [Pipeline] timeout 01:13:43.099 Timeout set to expire in 1 hr 0 min 01:13:43.101 [Pipeline] { 01:13:43.115 [Pipeline] sh 01:13:43.396 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 01:13:43.959 HEAD is now at 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:13:43.969 [Pipeline] sh 01:13:44.241 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 01:13:44.252 [Pipeline] sh 01:13:44.525 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 01:13:44.538 [Pipeline] sh 01:13:44.812 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo' 01:13:44.812 ++ readlink -f spdk_repo 01:13:45.070 + DIR_ROOT=/home/vagrant/spdk_repo 01:13:45.070 + [[ -n /home/vagrant/spdk_repo ]] 01:13:45.070 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 01:13:45.070 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 01:13:45.070 + [[ -d /home/vagrant/spdk_repo/spdk ]] 01:13:45.070 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 01:13:45.070 + [[ -d /home/vagrant/spdk_repo/output ]] 01:13:45.070 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 01:13:45.070 + cd /home/vagrant/spdk_repo 01:13:45.070 + source /etc/os-release 01:13:45.070 ++ NAME='Fedora Linux' 01:13:45.070 ++ VERSION='39 (Cloud Edition)' 01:13:45.070 ++ ID=fedora 01:13:45.070 ++ VERSION_ID=39 01:13:45.070 ++ VERSION_CODENAME= 01:13:45.070 ++ PLATFORM_ID=platform:f39 01:13:45.070 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 01:13:45.070 ++ ANSI_COLOR='0;38;2;60;110;180' 01:13:45.070 ++ LOGO=fedora-logo-icon 01:13:45.070 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 01:13:45.070 ++ HOME_URL=https://fedoraproject.org/ 01:13:45.070 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 01:13:45.070 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 01:13:45.070 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 01:13:45.070 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 01:13:45.070 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 01:13:45.070 ++ REDHAT_SUPPORT_PRODUCT=Fedora 01:13:45.070 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 01:13:45.070 ++ SUPPORT_END=2024-11-12 01:13:45.070 ++ VARIANT='Cloud Edition' 01:13:45.070 ++ VARIANT_ID=cloud 01:13:45.070 + uname -a 01:13:45.070 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 01:13:45.070 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:13:45.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:13:45.330 Hugepages 01:13:45.330 node hugesize free / total 01:13:45.330 node0 1048576kB 0 / 0 01:13:45.330 node0 2048kB 0 / 0 01:13:45.330 01:13:45.330 Type BDF Vendor Device NUMA Driver Device Block devices 01:13:45.330 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:13:45.330 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:13:45.330 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 01:13:45.330 + rm -f /tmp/spdk-ld-path 01:13:45.330 + source autorun-spdk.conf 01:13:45.330 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:13:45.330 ++ SPDK_TEST_NVMF=1 01:13:45.330 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 01:13:45.330 ++ SPDK_TEST_USDT=1 01:13:45.330 ++ SPDK_TEST_NVMF_MDNS=1 01:13:45.330 ++ SPDK_RUN_UBSAN=1 01:13:45.330 ++ NET_TYPE=virt 01:13:45.330 ++ SPDK_JSONRPC_GO_CLIENT=1 01:13:45.330 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:13:45.330 ++ RUN_NIGHTLY=0 01:13:45.330 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 01:13:45.330 + [[ -n '' ]] 01:13:45.330 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 01:13:45.330 + for M in /var/spdk/build-*-manifest.txt 01:13:45.330 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 01:13:45.330 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 01:13:45.330 + for M in /var/spdk/build-*-manifest.txt 01:13:45.330 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 01:13:45.330 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 01:13:45.330 + for M in /var/spdk/build-*-manifest.txt 01:13:45.330 + [[ -f /var/spdk/build-repo-manifest.txt ]] 01:13:45.330 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 01:13:45.330 ++ uname 01:13:45.330 + [[ Linux == \L\i\n\u\x ]] 01:13:45.330 + sudo dmesg -T 01:13:45.330 + sudo dmesg --clear 01:13:45.589 + dmesg_pid=4985 01:13:45.589 + [[ Fedora Linux == FreeBSD ]] 01:13:45.589 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:13:45.589 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 01:13:45.589 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 01:13:45.589 + sudo dmesg -Tw 01:13:45.589 + [[ -x /usr/src/fio-static/fio ]] 01:13:45.589 + export FIO_BIN=/usr/src/fio-static/fio 01:13:45.589 + FIO_BIN=/usr/src/fio-static/fio 01:13:45.589 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 01:13:45.589 + [[ ! -v VFIO_QEMU_BIN ]] 01:13:45.589 + [[ -e /usr/local/qemu/vfio-user-latest ]] 01:13:45.589 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:13:45.589 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:13:45.589 + [[ -e /usr/local/qemu/vanilla-latest ]] 01:13:45.589 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:13:45.589 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:13:45.589 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:13:45.589 05:08:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:13:45.589 05:08:37 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:13:45.589 05:08:37 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 01:13:45.589 05:08:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 01:13:45.589 05:08:37 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:13:45.589 05:08:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:13:45.589 05:08:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:45.589 05:08:37 -- scripts/common.sh@15 -- $ shopt -s extglob 01:13:45.590 05:08:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 01:13:45.590 05:08:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:45.590 05:08:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:45.590 05:08:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:45.590 05:08:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:45.590 05:08:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:45.590 05:08:37 -- paths/export.sh@5 -- $ export PATH 01:13:45.590 05:08:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:45.590 05:08:37 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:13:45.590 05:08:37 -- common/autobuild_common.sh@493 -- $ date +%s 01:13:45.590 05:08:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733720917.XXXXXX 01:13:45.590 05:08:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733720917.wNvlJg 01:13:45.590 05:08:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 01:13:45.590 05:08:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 01:13:45.590 05:08:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 01:13:45.590 05:08:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:13:45.590 05:08:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:13:45.590 05:08:37 -- common/autobuild_common.sh@509 -- $ get_config_params 01:13:45.590 05:08:37 -- common/autotest_common.sh@409 -- $ xtrace_disable 01:13:45.590 05:08:37 -- common/autotest_common.sh@10 -- $ set +x 01:13:45.590 05:08:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 01:13:45.590 05:08:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 01:13:45.590 05:08:37 -- pm/common@17 -- $ local monitor 01:13:45.590 05:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:13:45.590 05:08:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:13:45.590 05:08:37 -- pm/common@25 -- $ sleep 1 01:13:45.590 05:08:37 -- pm/common@21 -- $ date +%s 01:13:45.590 05:08:37 -- pm/common@21 -- $ date +%s 01:13:45.590 05:08:37 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720917 01:13:45.590 05:08:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720917 01:13:45.590 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720917_collect-cpu-load.pm.log 01:13:45.590 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720917_collect-vmstat.pm.log 01:13:46.524 05:08:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 01:13:46.524 05:08:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 01:13:46.524 05:08:38 -- spdk/autobuild.sh@12 -- $ umask 022 01:13:46.524 05:08:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 01:13:46.524 05:08:38 -- spdk/autobuild.sh@16 -- $ date -u 01:13:46.524 Mon Dec 9 05:08:38 AM UTC 2024 01:13:46.524 05:08:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 01:13:46.525 v25.01-pre-278-g66902d69a 01:13:46.525 05:08:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 01:13:46.525 05:08:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 01:13:46.525 05:08:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 01:13:46.525 05:08:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:13:46.525 05:08:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:13:46.525 05:08:38 -- common/autotest_common.sh@10 -- $ set +x 01:13:46.525 ************************************ 01:13:46.525 START TEST ubsan 01:13:46.525 ************************************ 01:13:46.525 using ubsan 01:13:46.525 05:08:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 01:13:46.525 01:13:46.525 real 0m0.000s 01:13:46.525 user 0m0.000s 01:13:46.525 sys 0m0.000s 01:13:46.525 05:08:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:13:46.525 ************************************ 01:13:46.525 END TEST ubsan 01:13:46.525 ************************************ 01:13:46.525 05:08:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 01:13:46.525 05:08:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 01:13:46.525 05:08:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 01:13:46.525 05:08:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 01:13:46.525 05:08:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 01:13:46.783 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:13:46.783 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 01:13:47.041 Using 'verbs' RDMA provider 01:13:57.635 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 01:14:07.631 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 01:14:07.631 go version go1.21.1 linux/amd64 01:14:07.631 Creating mk/config.mk...done. 01:14:07.631 Creating mk/cc.flags.mk...done. 
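[Editor's note] The configure invocation above is derived from autorun-spdk.conf: the test flags appear to map onto build options (SPDK_RUN_UBSAN=1 to --enable-ubsan, SPDK_TEST_USDT=1 to --with-usdt, SPDK_TEST_NVMF_MDNS=1 to --with-avahi, SPDK_JSONRPC_GO_CLIENT=1 to --with-golang), with --with-shared appended for this job. A rough manual equivalent, using a shortened flag list for illustration rather than the full set logged above, would be:

    # Sketch of reproducing this build step by hand (paths per this job's layout);
    # the complete flag list used by the job is the longer one printed in the log.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-usdt --with-avahi \
                --with-golang --enable-ubsan --with-shared
    make -j10

The "Using default DPDK in .../spdk/dpdk/build" line indicates no --with-dpdk path was given, so configure falls back to building the bundled DPDK submodule, which is the Meson/ninja output that follows.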
01:14:07.631 Type 'make' to build. 01:14:07.631 05:08:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 01:14:07.631 05:08:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:14:07.631 05:08:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:14:07.631 05:08:58 -- common/autotest_common.sh@10 -- $ set +x 01:14:07.631 ************************************ 01:14:07.631 START TEST make 01:14:07.631 ************************************ 01:14:07.631 05:08:58 make -- common/autotest_common.sh@1129 -- $ make -j10 01:14:07.631 make[1]: Nothing to be done for 'all'. 01:14:17.602 The Meson build system 01:14:17.602 Version: 1.5.0 01:14:17.602 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 01:14:17.602 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 01:14:17.602 Build type: native build 01:14:17.602 Program cat found: YES (/usr/bin/cat) 01:14:17.602 Project name: DPDK 01:14:17.602 Project version: 24.03.0 01:14:17.602 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 01:14:17.602 C linker for the host machine: cc ld.bfd 2.40-14 01:14:17.602 Host machine cpu family: x86_64 01:14:17.602 Host machine cpu: x86_64 01:14:17.602 Message: ## Building in Developer Mode ## 01:14:17.602 Program pkg-config found: YES (/usr/bin/pkg-config) 01:14:17.602 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 01:14:17.602 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 01:14:17.602 Program python3 found: YES (/usr/bin/python3) 01:14:17.602 Program cat found: YES (/usr/bin/cat) 01:14:17.602 Compiler for C supports arguments -march=native: YES 01:14:17.602 Checking for size of "void *" : 8 01:14:17.602 Checking for size of "void *" : 8 (cached) 01:14:17.602 Compiler for C supports link arguments -Wl,--undefined-version: YES 01:14:17.602 Library m found: YES 01:14:17.602 Library numa found: YES 01:14:17.602 Has header "numaif.h" : YES 01:14:17.602 Library fdt found: NO 01:14:17.602 Library execinfo found: NO 01:14:17.602 Has header "execinfo.h" : YES 01:14:17.602 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 01:14:17.602 Run-time dependency libarchive found: NO (tried pkgconfig) 01:14:17.602 Run-time dependency libbsd found: NO (tried pkgconfig) 01:14:17.602 Run-time dependency jansson found: NO (tried pkgconfig) 01:14:17.602 Run-time dependency openssl found: YES 3.1.1 01:14:17.602 Run-time dependency libpcap found: YES 1.10.4 01:14:17.602 Has header "pcap.h" with dependency libpcap: YES 01:14:17.602 Compiler for C supports arguments -Wcast-qual: YES 01:14:17.602 Compiler for C supports arguments -Wdeprecated: YES 01:14:17.602 Compiler for C supports arguments -Wformat: YES 01:14:17.602 Compiler for C supports arguments -Wformat-nonliteral: NO 01:14:17.602 Compiler for C supports arguments -Wformat-security: NO 01:14:17.602 Compiler for C supports arguments -Wmissing-declarations: YES 01:14:17.602 Compiler for C supports arguments -Wmissing-prototypes: YES 01:14:17.602 Compiler for C supports arguments -Wnested-externs: YES 01:14:17.602 Compiler for C supports arguments -Wold-style-definition: YES 01:14:17.602 Compiler for C supports arguments -Wpointer-arith: YES 01:14:17.602 Compiler for C supports arguments -Wsign-compare: YES 01:14:17.602 Compiler for C supports arguments -Wstrict-prototypes: YES 01:14:17.602 Compiler for C supports arguments -Wundef: YES 01:14:17.602 Compiler for C supports arguments -Wwrite-strings: YES 
01:14:17.602 Compiler for C supports arguments -Wno-address-of-packed-member: YES 01:14:17.602 Compiler for C supports arguments -Wno-packed-not-aligned: YES 01:14:17.602 Compiler for C supports arguments -Wno-missing-field-initializers: YES 01:14:17.602 Compiler for C supports arguments -Wno-zero-length-bounds: YES 01:14:17.602 Program objdump found: YES (/usr/bin/objdump) 01:14:17.602 Compiler for C supports arguments -mavx512f: YES 01:14:17.602 Checking if "AVX512 checking" compiles: YES 01:14:17.602 Fetching value of define "__SSE4_2__" : 1 01:14:17.602 Fetching value of define "__AES__" : 1 01:14:17.602 Fetching value of define "__AVX__" : 1 01:14:17.602 Fetching value of define "__AVX2__" : 1 01:14:17.603 Fetching value of define "__AVX512BW__" : 1 01:14:17.603 Fetching value of define "__AVX512CD__" : 1 01:14:17.603 Fetching value of define "__AVX512DQ__" : 1 01:14:17.603 Fetching value of define "__AVX512F__" : 1 01:14:17.603 Fetching value of define "__AVX512VL__" : 1 01:14:17.603 Fetching value of define "__PCLMUL__" : 1 01:14:17.603 Fetching value of define "__RDRND__" : 1 01:14:17.603 Fetching value of define "__RDSEED__" : 1 01:14:17.603 Fetching value of define "__VPCLMULQDQ__" : 1 01:14:17.603 Fetching value of define "__znver1__" : (undefined) 01:14:17.603 Fetching value of define "__znver2__" : (undefined) 01:14:17.603 Fetching value of define "__znver3__" : (undefined) 01:14:17.603 Fetching value of define "__znver4__" : (undefined) 01:14:17.603 Compiler for C supports arguments -Wno-format-truncation: YES 01:14:17.603 Message: lib/log: Defining dependency "log" 01:14:17.603 Message: lib/kvargs: Defining dependency "kvargs" 01:14:17.603 Message: lib/telemetry: Defining dependency "telemetry" 01:14:17.603 Checking for function "getentropy" : NO 01:14:17.603 Message: lib/eal: Defining dependency "eal" 01:14:17.603 Message: lib/ring: Defining dependency "ring" 01:14:17.603 Message: lib/rcu: Defining dependency "rcu" 01:14:17.603 Message: lib/mempool: Defining dependency "mempool" 01:14:17.603 Message: lib/mbuf: Defining dependency "mbuf" 01:14:17.603 Fetching value of define "__PCLMUL__" : 1 (cached) 01:14:17.603 Fetching value of define "__AVX512F__" : 1 (cached) 01:14:17.603 Fetching value of define "__AVX512BW__" : 1 (cached) 01:14:17.603 Fetching value of define "__AVX512DQ__" : 1 (cached) 01:14:17.603 Fetching value of define "__AVX512VL__" : 1 (cached) 01:14:17.603 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 01:14:17.603 Compiler for C supports arguments -mpclmul: YES 01:14:17.603 Compiler for C supports arguments -maes: YES 01:14:17.603 Compiler for C supports arguments -mavx512f: YES (cached) 01:14:17.603 Compiler for C supports arguments -mavx512bw: YES 01:14:17.603 Compiler for C supports arguments -mavx512dq: YES 01:14:17.603 Compiler for C supports arguments -mavx512vl: YES 01:14:17.603 Compiler for C supports arguments -mvpclmulqdq: YES 01:14:17.603 Compiler for C supports arguments -mavx2: YES 01:14:17.603 Compiler for C supports arguments -mavx: YES 01:14:17.603 Message: lib/net: Defining dependency "net" 01:14:17.603 Message: lib/meter: Defining dependency "meter" 01:14:17.603 Message: lib/ethdev: Defining dependency "ethdev" 01:14:17.603 Message: lib/pci: Defining dependency "pci" 01:14:17.603 Message: lib/cmdline: Defining dependency "cmdline" 01:14:17.603 Message: lib/hash: Defining dependency "hash" 01:14:17.603 Message: lib/timer: Defining dependency "timer" 01:14:17.603 Message: lib/compressdev: Defining dependency "compressdev" 01:14:17.603 
Message: lib/cryptodev: Defining dependency "cryptodev" 01:14:17.603 Message: lib/dmadev: Defining dependency "dmadev" 01:14:17.603 Compiler for C supports arguments -Wno-cast-qual: YES 01:14:17.603 Message: lib/power: Defining dependency "power" 01:14:17.603 Message: lib/reorder: Defining dependency "reorder" 01:14:17.603 Message: lib/security: Defining dependency "security" 01:14:17.603 Has header "linux/userfaultfd.h" : YES 01:14:17.603 Has header "linux/vduse.h" : YES 01:14:17.603 Message: lib/vhost: Defining dependency "vhost" 01:14:17.603 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 01:14:17.603 Message: drivers/bus/pci: Defining dependency "bus_pci" 01:14:17.603 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 01:14:17.603 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 01:14:17.603 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 01:14:17.603 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 01:14:17.603 Message: Disabling ml/* drivers: missing internal dependency "mldev" 01:14:17.603 Message: Disabling event/* drivers: missing internal dependency "eventdev" 01:14:17.603 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 01:14:17.603 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 01:14:17.603 Program doxygen found: YES (/usr/local/bin/doxygen) 01:14:17.603 Configuring doxy-api-html.conf using configuration 01:14:17.603 Configuring doxy-api-man.conf using configuration 01:14:17.603 Program mandb found: YES (/usr/bin/mandb) 01:14:17.603 Program sphinx-build found: NO 01:14:17.603 Configuring rte_build_config.h using configuration 01:14:17.603 Message: 01:14:17.603 ================= 01:14:17.603 Applications Enabled 01:14:17.603 ================= 01:14:17.603 01:14:17.603 apps: 01:14:17.603 01:14:17.603 01:14:17.603 Message: 01:14:17.603 ================= 01:14:17.603 Libraries Enabled 01:14:17.603 ================= 01:14:17.603 01:14:17.603 libs: 01:14:17.603 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 01:14:17.603 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 01:14:17.603 cryptodev, dmadev, power, reorder, security, vhost, 01:14:17.603 01:14:17.603 Message: 01:14:17.603 =============== 01:14:17.603 Drivers Enabled 01:14:17.603 =============== 01:14:17.603 01:14:17.603 common: 01:14:17.603 01:14:17.603 bus: 01:14:17.603 pci, vdev, 01:14:17.603 mempool: 01:14:17.603 ring, 01:14:17.603 dma: 01:14:17.603 01:14:17.603 net: 01:14:17.603 01:14:17.603 crypto: 01:14:17.603 01:14:17.603 compress: 01:14:17.603 01:14:17.603 vdpa: 01:14:17.603 01:14:17.603 01:14:17.603 Message: 01:14:17.603 ================= 01:14:17.603 Content Skipped 01:14:17.603 ================= 01:14:17.603 01:14:17.603 apps: 01:14:17.603 dumpcap: explicitly disabled via build config 01:14:17.603 graph: explicitly disabled via build config 01:14:17.603 pdump: explicitly disabled via build config 01:14:17.603 proc-info: explicitly disabled via build config 01:14:17.603 test-acl: explicitly disabled via build config 01:14:17.603 test-bbdev: explicitly disabled via build config 01:14:17.603 test-cmdline: explicitly disabled via build config 01:14:17.603 test-compress-perf: explicitly disabled via build config 01:14:17.603 test-crypto-perf: explicitly disabled via build config 01:14:17.603 test-dma-perf: explicitly disabled via build config 01:14:17.603 test-eventdev: explicitly disabled via build config 01:14:17.603 test-fib: explicitly disabled via build 
config 01:14:17.603 test-flow-perf: explicitly disabled via build config 01:14:17.603 test-gpudev: explicitly disabled via build config 01:14:17.603 test-mldev: explicitly disabled via build config 01:14:17.603 test-pipeline: explicitly disabled via build config 01:14:17.603 test-pmd: explicitly disabled via build config 01:14:17.603 test-regex: explicitly disabled via build config 01:14:17.603 test-sad: explicitly disabled via build config 01:14:17.603 test-security-perf: explicitly disabled via build config 01:14:17.603 01:14:17.603 libs: 01:14:17.603 argparse: explicitly disabled via build config 01:14:17.603 metrics: explicitly disabled via build config 01:14:17.603 acl: explicitly disabled via build config 01:14:17.603 bbdev: explicitly disabled via build config 01:14:17.603 bitratestats: explicitly disabled via build config 01:14:17.603 bpf: explicitly disabled via build config 01:14:17.603 cfgfile: explicitly disabled via build config 01:14:17.603 distributor: explicitly disabled via build config 01:14:17.603 efd: explicitly disabled via build config 01:14:17.603 eventdev: explicitly disabled via build config 01:14:17.603 dispatcher: explicitly disabled via build config 01:14:17.603 gpudev: explicitly disabled via build config 01:14:17.603 gro: explicitly disabled via build config 01:14:17.603 gso: explicitly disabled via build config 01:14:17.603 ip_frag: explicitly disabled via build config 01:14:17.603 jobstats: explicitly disabled via build config 01:14:17.603 latencystats: explicitly disabled via build config 01:14:17.603 lpm: explicitly disabled via build config 01:14:17.603 member: explicitly disabled via build config 01:14:17.603 pcapng: explicitly disabled via build config 01:14:17.603 rawdev: explicitly disabled via build config 01:14:17.603 regexdev: explicitly disabled via build config 01:14:17.603 mldev: explicitly disabled via build config 01:14:17.603 rib: explicitly disabled via build config 01:14:17.603 sched: explicitly disabled via build config 01:14:17.603 stack: explicitly disabled via build config 01:14:17.603 ipsec: explicitly disabled via build config 01:14:17.603 pdcp: explicitly disabled via build config 01:14:17.603 fib: explicitly disabled via build config 01:14:17.603 port: explicitly disabled via build config 01:14:17.603 pdump: explicitly disabled via build config 01:14:17.603 table: explicitly disabled via build config 01:14:17.603 pipeline: explicitly disabled via build config 01:14:17.603 graph: explicitly disabled via build config 01:14:17.603 node: explicitly disabled via build config 01:14:17.603 01:14:17.603 drivers: 01:14:17.603 common/cpt: not in enabled drivers build config 01:14:17.603 common/dpaax: not in enabled drivers build config 01:14:17.603 common/iavf: not in enabled drivers build config 01:14:17.603 common/idpf: not in enabled drivers build config 01:14:17.603 common/ionic: not in enabled drivers build config 01:14:17.603 common/mvep: not in enabled drivers build config 01:14:17.603 common/octeontx: not in enabled drivers build config 01:14:17.603 bus/auxiliary: not in enabled drivers build config 01:14:17.603 bus/cdx: not in enabled drivers build config 01:14:17.603 bus/dpaa: not in enabled drivers build config 01:14:17.603 bus/fslmc: not in enabled drivers build config 01:14:17.603 bus/ifpga: not in enabled drivers build config 01:14:17.603 bus/platform: not in enabled drivers build config 01:14:17.603 bus/uacce: not in enabled drivers build config 01:14:17.603 bus/vmbus: not in enabled drivers build config 01:14:17.603 
common/cnxk: not in enabled drivers build config 01:14:17.603 common/mlx5: not in enabled drivers build config 01:14:17.603 common/nfp: not in enabled drivers build config 01:14:17.603 common/nitrox: not in enabled drivers build config 01:14:17.604 common/qat: not in enabled drivers build config 01:14:17.604 common/sfc_efx: not in enabled drivers build config 01:14:17.604 mempool/bucket: not in enabled drivers build config 01:14:17.604 mempool/cnxk: not in enabled drivers build config 01:14:17.604 mempool/dpaa: not in enabled drivers build config 01:14:17.604 mempool/dpaa2: not in enabled drivers build config 01:14:17.604 mempool/octeontx: not in enabled drivers build config 01:14:17.604 mempool/stack: not in enabled drivers build config 01:14:17.604 dma/cnxk: not in enabled drivers build config 01:14:17.604 dma/dpaa: not in enabled drivers build config 01:14:17.604 dma/dpaa2: not in enabled drivers build config 01:14:17.604 dma/hisilicon: not in enabled drivers build config 01:14:17.604 dma/idxd: not in enabled drivers build config 01:14:17.604 dma/ioat: not in enabled drivers build config 01:14:17.604 dma/skeleton: not in enabled drivers build config 01:14:17.604 net/af_packet: not in enabled drivers build config 01:14:17.604 net/af_xdp: not in enabled drivers build config 01:14:17.604 net/ark: not in enabled drivers build config 01:14:17.604 net/atlantic: not in enabled drivers build config 01:14:17.604 net/avp: not in enabled drivers build config 01:14:17.604 net/axgbe: not in enabled drivers build config 01:14:17.604 net/bnx2x: not in enabled drivers build config 01:14:17.604 net/bnxt: not in enabled drivers build config 01:14:17.604 net/bonding: not in enabled drivers build config 01:14:17.604 net/cnxk: not in enabled drivers build config 01:14:17.604 net/cpfl: not in enabled drivers build config 01:14:17.604 net/cxgbe: not in enabled drivers build config 01:14:17.604 net/dpaa: not in enabled drivers build config 01:14:17.604 net/dpaa2: not in enabled drivers build config 01:14:17.604 net/e1000: not in enabled drivers build config 01:14:17.604 net/ena: not in enabled drivers build config 01:14:17.604 net/enetc: not in enabled drivers build config 01:14:17.604 net/enetfec: not in enabled drivers build config 01:14:17.604 net/enic: not in enabled drivers build config 01:14:17.604 net/failsafe: not in enabled drivers build config 01:14:17.604 net/fm10k: not in enabled drivers build config 01:14:17.604 net/gve: not in enabled drivers build config 01:14:17.604 net/hinic: not in enabled drivers build config 01:14:17.604 net/hns3: not in enabled drivers build config 01:14:17.604 net/i40e: not in enabled drivers build config 01:14:17.604 net/iavf: not in enabled drivers build config 01:14:17.604 net/ice: not in enabled drivers build config 01:14:17.604 net/idpf: not in enabled drivers build config 01:14:17.604 net/igc: not in enabled drivers build config 01:14:17.604 net/ionic: not in enabled drivers build config 01:14:17.604 net/ipn3ke: not in enabled drivers build config 01:14:17.604 net/ixgbe: not in enabled drivers build config 01:14:17.604 net/mana: not in enabled drivers build config 01:14:17.604 net/memif: not in enabled drivers build config 01:14:17.604 net/mlx4: not in enabled drivers build config 01:14:17.604 net/mlx5: not in enabled drivers build config 01:14:17.604 net/mvneta: not in enabled drivers build config 01:14:17.604 net/mvpp2: not in enabled drivers build config 01:14:17.604 net/netvsc: not in enabled drivers build config 01:14:17.604 net/nfb: not in enabled drivers 
build config 01:14:17.604 net/nfp: not in enabled drivers build config 01:14:17.604 net/ngbe: not in enabled drivers build config 01:14:17.604 net/null: not in enabled drivers build config 01:14:17.604 net/octeontx: not in enabled drivers build config 01:14:17.604 net/octeon_ep: not in enabled drivers build config 01:14:17.604 net/pcap: not in enabled drivers build config 01:14:17.604 net/pfe: not in enabled drivers build config 01:14:17.604 net/qede: not in enabled drivers build config 01:14:17.604 net/ring: not in enabled drivers build config 01:14:17.604 net/sfc: not in enabled drivers build config 01:14:17.604 net/softnic: not in enabled drivers build config 01:14:17.604 net/tap: not in enabled drivers build config 01:14:17.604 net/thunderx: not in enabled drivers build config 01:14:17.604 net/txgbe: not in enabled drivers build config 01:14:17.604 net/vdev_netvsc: not in enabled drivers build config 01:14:17.604 net/vhost: not in enabled drivers build config 01:14:17.604 net/virtio: not in enabled drivers build config 01:14:17.604 net/vmxnet3: not in enabled drivers build config 01:14:17.604 raw/*: missing internal dependency, "rawdev" 01:14:17.604 crypto/armv8: not in enabled drivers build config 01:14:17.604 crypto/bcmfs: not in enabled drivers build config 01:14:17.604 crypto/caam_jr: not in enabled drivers build config 01:14:17.604 crypto/ccp: not in enabled drivers build config 01:14:17.604 crypto/cnxk: not in enabled drivers build config 01:14:17.604 crypto/dpaa_sec: not in enabled drivers build config 01:14:17.604 crypto/dpaa2_sec: not in enabled drivers build config 01:14:17.604 crypto/ipsec_mb: not in enabled drivers build config 01:14:17.604 crypto/mlx5: not in enabled drivers build config 01:14:17.604 crypto/mvsam: not in enabled drivers build config 01:14:17.604 crypto/nitrox: not in enabled drivers build config 01:14:17.604 crypto/null: not in enabled drivers build config 01:14:17.604 crypto/octeontx: not in enabled drivers build config 01:14:17.604 crypto/openssl: not in enabled drivers build config 01:14:17.604 crypto/scheduler: not in enabled drivers build config 01:14:17.604 crypto/uadk: not in enabled drivers build config 01:14:17.604 crypto/virtio: not in enabled drivers build config 01:14:17.604 compress/isal: not in enabled drivers build config 01:14:17.604 compress/mlx5: not in enabled drivers build config 01:14:17.604 compress/nitrox: not in enabled drivers build config 01:14:17.604 compress/octeontx: not in enabled drivers build config 01:14:17.604 compress/zlib: not in enabled drivers build config 01:14:17.604 regex/*: missing internal dependency, "regexdev" 01:14:17.604 ml/*: missing internal dependency, "mldev" 01:14:17.604 vdpa/ifc: not in enabled drivers build config 01:14:17.604 vdpa/mlx5: not in enabled drivers build config 01:14:17.604 vdpa/nfp: not in enabled drivers build config 01:14:17.604 vdpa/sfc: not in enabled drivers build config 01:14:17.604 event/*: missing internal dependency, "eventdev" 01:14:17.604 baseband/*: missing internal dependency, "bbdev" 01:14:17.604 gpu/*: missing internal dependency, "gpudev" 01:14:17.604 01:14:17.604 01:14:17.863 Build targets in project: 84 01:14:17.863 01:14:17.863 DPDK 24.03.0 01:14:17.863 01:14:17.863 User defined options 01:14:17.863 buildtype : debug 01:14:17.863 default_library : shared 01:14:17.863 libdir : lib 01:14:17.863 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 01:14:17.863 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 01:14:17.863 c_link_args 
: 01:14:17.863 cpu_instruction_set: native 01:14:17.863 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 01:14:17.863 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 01:14:17.863 enable_docs : false 01:14:17.863 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 01:14:17.863 enable_kmods : false 01:14:17.863 max_lcores : 128 01:14:17.863 tests : false 01:14:17.863 01:14:17.863 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 01:14:18.121 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 01:14:18.121 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 01:14:18.380 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 01:14:18.380 [3/267] Linking static target lib/librte_kvargs.a 01:14:18.380 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 01:14:18.380 [5/267] Linking static target lib/librte_log.a 01:14:18.380 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 01:14:18.380 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 01:14:18.380 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 01:14:18.637 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 01:14:18.637 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 01:14:18.637 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 01:14:18.637 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 01:14:18.637 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 01:14:18.637 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 01:14:18.637 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 01:14:18.637 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 01:14:18.895 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 01:14:18.895 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 01:14:18.895 [19/267] Linking static target lib/librte_telemetry.a 01:14:18.895 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 01:14:19.152 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 01:14:19.152 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 01:14:19.152 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 01:14:19.152 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 01:14:19.152 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 01:14:19.152 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 01:14:19.152 [27/267] Linking target lib/librte_log.so.24.1 01:14:19.152 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 01:14:19.152 [29/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 01:14:19.152 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 01:14:19.411 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 01:14:19.411 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 01:14:19.411 [33/267] Linking target lib/librte_kvargs.so.24.1 01:14:19.411 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 01:14:19.411 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 01:14:19.411 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 01:14:19.411 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 01:14:19.411 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 01:14:19.411 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 01:14:19.411 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 01:14:19.411 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 01:14:19.411 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 01:14:19.669 [43/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 01:14:19.669 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 01:14:19.669 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 01:14:19.669 [46/267] Linking target lib/librte_telemetry.so.24.1 01:14:19.669 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 01:14:19.669 [48/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 01:14:19.932 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 01:14:19.932 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 01:14:19.932 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 01:14:19.932 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 01:14:19.932 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 01:14:19.932 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 01:14:19.932 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 01:14:19.932 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 01:14:19.932 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 01:14:20.189 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 01:14:20.189 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 01:14:20.189 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 01:14:20.189 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 01:14:20.189 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 01:14:20.189 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 01:14:20.189 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 01:14:20.447 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 01:14:20.447 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 01:14:20.447 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 01:14:20.447 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 01:14:20.447 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 01:14:20.447 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 01:14:20.447 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 01:14:20.705 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 01:14:20.705 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 01:14:20.705 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 01:14:20.705 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 01:14:20.705 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 01:14:20.705 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 01:14:20.705 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 01:14:20.705 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 01:14:20.705 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 01:14:20.964 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 01:14:20.964 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 01:14:20.964 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 01:14:20.964 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 01:14:20.964 [85/267] Linking static target lib/librte_eal.a 01:14:20.964 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 01:14:20.964 [87/267] Linking static target lib/librte_ring.a 01:14:21.223 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 01:14:21.223 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 01:14:21.223 [90/267] Linking static target lib/librte_rcu.a 01:14:21.223 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 01:14:21.223 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 01:14:21.223 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 01:14:21.223 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 01:14:21.223 [95/267] Linking static target lib/librte_mempool.a 01:14:21.481 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 01:14:21.481 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 01:14:21.481 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 01:14:21.481 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 01:14:21.481 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 01:14:21.481 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 01:14:21.481 [102/267] Linking static target lib/librte_mbuf.a 01:14:21.481 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 01:14:21.481 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 01:14:21.740 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 01:14:21.740 [106/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 01:14:21.740 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 01:14:21.740 [108/267] Linking static target lib/librte_net.a 01:14:21.740 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 01:14:21.740 [110/267] Linking static target lib/librte_meter.a 01:14:21.998 [111/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 01:14:21.998 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 01:14:21.998 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 01:14:21.998 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 01:14:21.998 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 01:14:21.998 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 01:14:22.256 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 01:14:22.256 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 01:14:22.256 [119/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 01:14:22.256 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 01:14:22.515 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 01:14:22.515 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 01:14:22.515 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 01:14:22.515 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 01:14:22.773 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 01:14:22.773 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 01:14:22.773 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 01:14:22.773 [128/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 01:14:22.773 [129/267] Linking static target lib/librte_pci.a 01:14:22.773 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 01:14:22.773 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 01:14:22.773 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 01:14:22.773 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 01:14:22.773 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 01:14:22.773 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 01:14:22.773 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 01:14:22.773 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 01:14:22.773 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 01:14:22.773 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 01:14:22.773 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 01:14:22.773 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 01:14:22.773 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 01:14:22.773 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 01:14:23.030 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 01:14:23.030 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 01:14:23.030 [146/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 01:14:23.030 [147/267] Linking static target lib/librte_ethdev.a 01:14:23.030 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 01:14:23.030 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 01:14:23.030 [150/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 01:14:23.030 [151/267] Linking static target lib/librte_cmdline.a 01:14:23.288 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 01:14:23.288 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 01:14:23.289 [154/267] Linking static target lib/librte_timer.a 01:14:23.289 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 01:14:23.289 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 01:14:23.289 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 01:14:23.289 [158/267] Linking static target lib/librte_hash.a 01:14:23.289 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 01:14:23.289 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 01:14:23.289 [161/267] Linking static target lib/librte_compressdev.a 01:14:23.547 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 01:14:23.547 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 01:14:23.547 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 01:14:23.547 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 01:14:23.547 [166/267] Linking static target lib/librte_dmadev.a 01:14:23.804 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 01:14:23.804 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 01:14:23.804 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 01:14:23.804 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 01:14:23.804 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 01:14:24.061 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 01:14:24.061 [173/267] Linking static target lib/librte_cryptodev.a 01:14:24.061 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 01:14:24.061 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 01:14:24.061 [176/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 01:14:24.061 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 01:14:24.061 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 01:14:24.061 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 01:14:24.318 [180/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 01:14:24.318 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 01:14:24.318 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 01:14:24.318 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 01:14:24.575 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 01:14:24.575 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 01:14:24.575 [186/267] Linking static target lib/librte_power.a 01:14:24.575 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 01:14:24.575 [188/267] Linking static target lib/librte_reorder.a 01:14:24.575 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 01:14:24.575 [190/267] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 01:14:24.575 [191/267] Linking static target lib/librte_security.a 01:14:24.832 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 01:14:24.832 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 01:14:24.832 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 01:14:25.088 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 01:14:25.088 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 01:14:25.088 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 01:14:25.088 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 01:14:25.345 [199/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 01:14:25.345 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 01:14:25.345 [201/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 01:14:25.345 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 01:14:25.345 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 01:14:25.603 [204/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 01:14:25.603 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 01:14:25.603 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 01:14:25.603 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 01:14:25.603 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 01:14:25.861 [209/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 01:14:25.861 [210/267] Linking static target drivers/libtmp_rte_mempool_ring.a 01:14:25.861 [211/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 01:14:25.861 [212/267] Linking static target drivers/libtmp_rte_bus_pci.a 01:14:25.861 [213/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 01:14:25.861 [214/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 01:14:25.861 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:14:25.861 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:14:25.861 [217/267] Linking static target drivers/librte_bus_vdev.a 01:14:25.861 [218/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 01:14:25.861 [219/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:14:25.861 [220/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:14:25.861 [221/267] Linking static target drivers/librte_mempool_ring.a 01:14:25.861 [222/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 01:14:25.861 [223/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:14:25.861 [224/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:14:25.861 [225/267] Linking static target drivers/librte_bus_pci.a 01:14:26.119 [226/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 01:14:26.377 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 01:14:26.635 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 01:14:26.635 [229/267] Linking static target lib/librte_vhost.a 01:14:27.569 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 01:14:27.827 [231/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 01:14:27.827 [232/267] Linking target lib/librte_eal.so.24.1 01:14:27.827 [233/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 01:14:28.085 [234/267] Linking target lib/librte_pci.so.24.1 01:14:28.085 [235/267] Linking target lib/librte_meter.so.24.1 01:14:28.085 [236/267] Linking target lib/librte_dmadev.so.24.1 01:14:28.085 [237/267] Linking target lib/librte_timer.so.24.1 01:14:28.085 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 01:14:28.085 [239/267] Linking target lib/librte_ring.so.24.1 01:14:28.085 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 01:14:28.085 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 01:14:28.085 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 01:14:28.085 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 01:14:28.085 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 01:14:28.085 [245/267] Linking target lib/librte_rcu.so.24.1 01:14:28.085 [246/267] Linking target drivers/librte_bus_pci.so.24.1 01:14:28.085 [247/267] Linking target lib/librte_mempool.so.24.1 01:14:28.085 [248/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 01:14:28.085 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 01:14:28.344 [250/267] Linking target drivers/librte_mempool_ring.so.24.1 01:14:28.344 [251/267] Linking target lib/librte_mbuf.so.24.1 01:14:28.344 [252/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 01:14:28.344 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 01:14:28.344 [254/267] Linking target lib/librte_net.so.24.1 01:14:28.344 [255/267] Linking target lib/librte_compressdev.so.24.1 01:14:28.344 [256/267] Linking target lib/librte_reorder.so.24.1 01:14:28.344 [257/267] Linking target lib/librte_cryptodev.so.24.1 01:14:28.608 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 01:14:28.608 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 01:14:28.608 [260/267] Linking target lib/librte_security.so.24.1 01:14:28.608 [261/267] Linking target lib/librte_cmdline.so.24.1 01:14:28.608 [262/267] Linking target lib/librte_hash.so.24.1 01:14:28.608 [263/267] Linking target lib/librte_ethdev.so.24.1 01:14:28.608 [264/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 01:14:28.608 [265/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 01:14:28.608 [266/267] Linking target lib/librte_power.so.24.1 01:14:28.608 [267/267] Linking target lib/librte_vhost.so.24.1 01:14:28.608 INFO: autodetecting backend as ninja 01:14:28.608 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 01:14:43.520 CC lib/log/log_flags.o 01:14:43.520 CC lib/log/log.o 01:14:43.520 CC lib/ut/ut.o 01:14:43.520 CC 
lib/log/log_deprecated.o 01:14:43.520 CC lib/ut_mock/mock.o 01:14:43.520 LIB libspdk_ut.a 01:14:43.521 LIB libspdk_ut_mock.a 01:14:43.521 SO libspdk_ut_mock.so.6.0 01:14:43.521 SO libspdk_ut.so.2.0 01:14:43.521 LIB libspdk_log.a 01:14:43.521 SYMLINK libspdk_ut.so 01:14:43.521 SYMLINK libspdk_ut_mock.so 01:14:43.521 SO libspdk_log.so.7.1 01:14:43.521 SYMLINK libspdk_log.so 01:14:43.521 CC lib/ioat/ioat.o 01:14:43.521 CXX lib/trace_parser/trace.o 01:14:43.521 CC lib/util/base64.o 01:14:43.521 CC lib/util/bit_array.o 01:14:43.521 CC lib/dma/dma.o 01:14:43.521 CC lib/util/cpuset.o 01:14:43.521 CC lib/util/crc16.o 01:14:43.521 CC lib/util/crc32.o 01:14:43.521 CC lib/util/crc32c.o 01:14:43.521 CC lib/vfio_user/host/vfio_user_pci.o 01:14:43.521 CC lib/vfio_user/host/vfio_user.o 01:14:43.521 CC lib/util/crc32_ieee.o 01:14:43.521 CC lib/util/crc64.o 01:14:43.521 LIB libspdk_dma.a 01:14:43.521 CC lib/util/dif.o 01:14:43.521 SO libspdk_dma.so.5.0 01:14:43.521 CC lib/util/fd.o 01:14:43.521 SYMLINK libspdk_dma.so 01:14:43.521 CC lib/util/fd_group.o 01:14:43.521 LIB libspdk_ioat.a 01:14:43.521 CC lib/util/file.o 01:14:43.521 CC lib/util/hexlify.o 01:14:43.521 CC lib/util/iov.o 01:14:43.521 SO libspdk_ioat.so.7.0 01:14:43.521 CC lib/util/math.o 01:14:43.521 LIB libspdk_vfio_user.a 01:14:43.521 SYMLINK libspdk_ioat.so 01:14:43.521 CC lib/util/net.o 01:14:43.521 CC lib/util/pipe.o 01:14:43.521 SO libspdk_vfio_user.so.5.0 01:14:43.521 CC lib/util/strerror_tls.o 01:14:43.521 SYMLINK libspdk_vfio_user.so 01:14:43.521 CC lib/util/string.o 01:14:43.521 CC lib/util/uuid.o 01:14:43.521 CC lib/util/xor.o 01:14:43.521 CC lib/util/zipf.o 01:14:43.521 CC lib/util/md5.o 01:14:43.521 LIB libspdk_util.a 01:14:43.521 SO libspdk_util.so.10.1 01:14:43.779 LIB libspdk_trace_parser.a 01:14:43.779 SO libspdk_trace_parser.so.6.0 01:14:43.779 SYMLINK libspdk_util.so 01:14:43.779 SYMLINK libspdk_trace_parser.so 01:14:43.779 CC lib/env_dpdk/env.o 01:14:43.779 CC lib/env_dpdk/memory.o 01:14:43.779 CC lib/env_dpdk/threads.o 01:14:43.779 CC lib/env_dpdk/init.o 01:14:43.779 CC lib/idxd/idxd.o 01:14:43.779 CC lib/env_dpdk/pci.o 01:14:43.779 CC lib/conf/conf.o 01:14:43.779 CC lib/rdma_utils/rdma_utils.o 01:14:43.779 CC lib/json/json_parse.o 01:14:43.779 CC lib/vmd/vmd.o 01:14:44.037 CC lib/vmd/led.o 01:14:44.037 LIB libspdk_conf.a 01:14:44.037 CC lib/json/json_util.o 01:14:44.037 SO libspdk_conf.so.6.0 01:14:44.037 CC lib/idxd/idxd_user.o 01:14:44.037 LIB libspdk_rdma_utils.a 01:14:44.037 SYMLINK libspdk_conf.so 01:14:44.037 CC lib/env_dpdk/pci_ioat.o 01:14:44.037 SO libspdk_rdma_utils.so.1.0 01:14:44.037 CC lib/idxd/idxd_kernel.o 01:14:44.037 SYMLINK libspdk_rdma_utils.so 01:14:44.037 CC lib/env_dpdk/pci_virtio.o 01:14:44.037 CC lib/json/json_write.o 01:14:44.295 CC lib/env_dpdk/pci_vmd.o 01:14:44.295 CC lib/env_dpdk/pci_idxd.o 01:14:44.295 CC lib/env_dpdk/pci_event.o 01:14:44.295 CC lib/env_dpdk/sigbus_handler.o 01:14:44.295 CC lib/env_dpdk/pci_dpdk.o 01:14:44.295 CC lib/env_dpdk/pci_dpdk_2207.o 01:14:44.295 CC lib/env_dpdk/pci_dpdk_2211.o 01:14:44.295 LIB libspdk_idxd.a 01:14:44.295 SO libspdk_idxd.so.12.1 01:14:44.295 LIB libspdk_vmd.a 01:14:44.295 SYMLINK libspdk_idxd.so 01:14:44.295 LIB libspdk_json.a 01:14:44.295 SO libspdk_vmd.so.6.0 01:14:44.295 SO libspdk_json.so.6.0 01:14:44.295 SYMLINK libspdk_vmd.so 01:14:44.553 CC lib/rdma_provider/common.o 01:14:44.553 CC lib/rdma_provider/rdma_provider_verbs.o 01:14:44.553 SYMLINK libspdk_json.so 01:14:44.553 LIB libspdk_rdma_provider.a 01:14:44.553 SO libspdk_rdma_provider.so.7.0 
01:14:44.553 CC lib/jsonrpc/jsonrpc_server.o 01:14:44.553 CC lib/jsonrpc/jsonrpc_client.o 01:14:44.553 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:14:44.553 CC lib/jsonrpc/jsonrpc_client_tcp.o 01:14:44.553 SYMLINK libspdk_rdma_provider.so 01:14:44.810 LIB libspdk_env_dpdk.a 01:14:44.810 LIB libspdk_jsonrpc.a 01:14:44.810 SO libspdk_jsonrpc.so.6.0 01:14:44.810 SO libspdk_env_dpdk.so.15.1 01:14:45.068 SYMLINK libspdk_jsonrpc.so 01:14:45.068 SYMLINK libspdk_env_dpdk.so 01:14:45.068 CC lib/rpc/rpc.o 01:14:45.326 LIB libspdk_rpc.a 01:14:45.326 SO libspdk_rpc.so.6.0 01:14:45.326 SYMLINK libspdk_rpc.so 01:14:45.583 CC lib/keyring/keyring_rpc.o 01:14:45.583 CC lib/trace/trace.o 01:14:45.583 CC lib/trace/trace_rpc.o 01:14:45.583 CC lib/trace/trace_flags.o 01:14:45.583 CC lib/keyring/keyring.o 01:14:45.583 CC lib/notify/notify.o 01:14:45.583 CC lib/notify/notify_rpc.o 01:14:45.842 LIB libspdk_notify.a 01:14:45.842 LIB libspdk_keyring.a 01:14:45.842 SO libspdk_notify.so.6.0 01:14:45.842 SO libspdk_keyring.so.2.0 01:14:45.842 LIB libspdk_trace.a 01:14:45.842 SYMLINK libspdk_notify.so 01:14:45.842 SYMLINK libspdk_keyring.so 01:14:45.842 SO libspdk_trace.so.11.0 01:14:45.842 SYMLINK libspdk_trace.so 01:14:46.099 CC lib/sock/sock.o 01:14:46.099 CC lib/sock/sock_rpc.o 01:14:46.099 CC lib/thread/thread.o 01:14:46.099 CC lib/thread/iobuf.o 01:14:46.357 LIB libspdk_sock.a 01:14:46.357 SO libspdk_sock.so.10.0 01:14:46.357 SYMLINK libspdk_sock.so 01:14:46.616 CC lib/nvme/nvme_ctrlr_cmd.o 01:14:46.616 CC lib/nvme/nvme_ctrlr.o 01:14:46.616 CC lib/nvme/nvme_ns_cmd.o 01:14:46.616 CC lib/nvme/nvme_fabric.o 01:14:46.616 CC lib/nvme/nvme_pcie_common.o 01:14:46.616 CC lib/nvme/nvme.o 01:14:46.616 CC lib/nvme/nvme_ns.o 01:14:46.616 CC lib/nvme/nvme_qpair.o 01:14:46.616 CC lib/nvme/nvme_pcie.o 01:14:47.182 CC lib/nvme/nvme_quirks.o 01:14:47.182 CC lib/nvme/nvme_transport.o 01:14:47.182 CC lib/nvme/nvme_discovery.o 01:14:47.442 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:14:47.442 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:14:47.442 CC lib/nvme/nvme_tcp.o 01:14:47.442 LIB libspdk_thread.a 01:14:47.442 SO libspdk_thread.so.11.0 01:14:47.442 CC lib/nvme/nvme_opal.o 01:14:47.442 SYMLINK libspdk_thread.so 01:14:47.442 CC lib/nvme/nvme_io_msg.o 01:14:47.442 CC lib/nvme/nvme_poll_group.o 01:14:47.442 CC lib/nvme/nvme_zns.o 01:14:47.704 CC lib/nvme/nvme_stubs.o 01:14:47.704 CC lib/nvme/nvme_auth.o 01:14:47.961 CC lib/nvme/nvme_cuse.o 01:14:47.961 CC lib/nvme/nvme_rdma.o 01:14:47.961 CC lib/blob/blobstore.o 01:14:47.961 CC lib/accel/accel.o 01:14:47.961 CC lib/init/json_config.o 01:14:47.961 CC lib/virtio/virtio.o 01:14:48.219 CC lib/init/subsystem.o 01:14:48.219 CC lib/init/subsystem_rpc.o 01:14:48.219 CC lib/virtio/virtio_vhost_user.o 01:14:48.219 CC lib/virtio/virtio_vfio_user.o 01:14:48.476 CC lib/init/rpc.o 01:14:48.476 CC lib/virtio/virtio_pci.o 01:14:48.476 CC lib/blob/request.o 01:14:48.476 LIB libspdk_init.a 01:14:48.476 CC lib/blob/zeroes.o 01:14:48.476 SO libspdk_init.so.6.0 01:14:48.476 CC lib/accel/accel_rpc.o 01:14:48.476 CC lib/accel/accel_sw.o 01:14:48.476 CC lib/fsdev/fsdev.o 01:14:48.476 SYMLINK libspdk_init.so 01:14:48.476 CC lib/blob/blob_bs_dev.o 01:14:48.734 LIB libspdk_virtio.a 01:14:48.734 CC lib/fsdev/fsdev_io.o 01:14:48.734 CC lib/fsdev/fsdev_rpc.o 01:14:48.734 SO libspdk_virtio.so.7.0 01:14:48.734 SYMLINK libspdk_virtio.so 01:14:48.734 CC lib/event/app.o 01:14:48.734 CC lib/event/reactor.o 01:14:48.734 CC lib/event/log_rpc.o 01:14:48.734 CC lib/event/app_rpc.o 01:14:48.734 LIB libspdk_accel.a 01:14:48.734 CC 
lib/event/scheduler_static.o 01:14:48.734 SO libspdk_accel.so.16.0 01:14:48.991 SYMLINK libspdk_accel.so 01:14:48.991 LIB libspdk_nvme.a 01:14:48.991 CC lib/bdev/bdev.o 01:14:48.991 CC lib/bdev/bdev_rpc.o 01:14:48.991 CC lib/bdev/bdev_zone.o 01:14:48.992 CC lib/bdev/part.o 01:14:48.992 CC lib/bdev/scsi_nvme.o 01:14:48.992 SO libspdk_nvme.so.15.0 01:14:48.992 LIB libspdk_fsdev.a 01:14:48.992 SO libspdk_fsdev.so.2.0 01:14:48.992 LIB libspdk_event.a 01:14:49.249 SO libspdk_event.so.14.0 01:14:49.249 SYMLINK libspdk_fsdev.so 01:14:49.249 SYMLINK libspdk_event.so 01:14:49.249 SYMLINK libspdk_nvme.so 01:14:49.249 CC lib/fuse_dispatcher/fuse_dispatcher.o 01:14:49.814 LIB libspdk_fuse_dispatcher.a 01:14:49.814 SO libspdk_fuse_dispatcher.so.1.0 01:14:50.072 SYMLINK libspdk_fuse_dispatcher.so 01:14:50.330 LIB libspdk_blob.a 01:14:50.330 SO libspdk_blob.so.12.0 01:14:50.330 SYMLINK libspdk_blob.so 01:14:50.588 CC lib/blobfs/blobfs.o 01:14:50.588 CC lib/blobfs/tree.o 01:14:50.588 CC lib/lvol/lvol.o 01:14:50.846 LIB libspdk_bdev.a 01:14:51.104 SO libspdk_bdev.so.17.0 01:14:51.104 SYMLINK libspdk_bdev.so 01:14:51.104 CC lib/nbd/nbd.o 01:14:51.104 CC lib/nbd/nbd_rpc.o 01:14:51.104 CC lib/nvmf/ctrlr.o 01:14:51.104 CC lib/nvmf/ctrlr_discovery.o 01:14:51.104 CC lib/nvmf/ctrlr_bdev.o 01:14:51.104 CC lib/ftl/ftl_core.o 01:14:51.104 CC lib/scsi/dev.o 01:14:51.104 CC lib/ublk/ublk.o 01:14:51.104 LIB libspdk_blobfs.a 01:14:51.361 SO libspdk_blobfs.so.11.0 01:14:51.362 SYMLINK libspdk_blobfs.so 01:14:51.362 CC lib/ublk/ublk_rpc.o 01:14:51.362 LIB libspdk_lvol.a 01:14:51.362 SO libspdk_lvol.so.11.0 01:14:51.362 CC lib/nvmf/subsystem.o 01:14:51.362 CC lib/scsi/lun.o 01:14:51.362 SYMLINK libspdk_lvol.so 01:14:51.362 CC lib/ftl/ftl_init.o 01:14:51.362 CC lib/nvmf/nvmf.o 01:14:51.362 CC lib/nvmf/nvmf_rpc.o 01:14:51.619 LIB libspdk_nbd.a 01:14:51.619 SO libspdk_nbd.so.7.0 01:14:51.619 CC lib/ftl/ftl_layout.o 01:14:51.619 CC lib/nvmf/transport.o 01:14:51.620 SYMLINK libspdk_nbd.so 01:14:51.620 CC lib/nvmf/tcp.o 01:14:51.620 CC lib/scsi/port.o 01:14:51.620 LIB libspdk_ublk.a 01:14:51.620 CC lib/scsi/scsi.o 01:14:51.620 SO libspdk_ublk.so.3.0 01:14:51.620 CC lib/nvmf/stubs.o 01:14:51.877 SYMLINK libspdk_ublk.so 01:14:51.877 CC lib/nvmf/mdns_server.o 01:14:51.877 CC lib/ftl/ftl_debug.o 01:14:51.877 CC lib/scsi/scsi_bdev.o 01:14:51.877 CC lib/nvmf/rdma.o 01:14:52.135 CC lib/ftl/ftl_io.o 01:14:52.135 CC lib/nvmf/auth.o 01:14:52.135 CC lib/scsi/scsi_pr.o 01:14:52.135 CC lib/scsi/scsi_rpc.o 01:14:52.135 CC lib/scsi/task.o 01:14:52.135 CC lib/ftl/ftl_sb.o 01:14:52.135 CC lib/ftl/ftl_l2p.o 01:14:52.135 CC lib/ftl/ftl_l2p_flat.o 01:14:52.135 CC lib/ftl/ftl_nv_cache.o 01:14:52.395 CC lib/ftl/ftl_band.o 01:14:52.395 LIB libspdk_scsi.a 01:14:52.395 CC lib/ftl/ftl_band_ops.o 01:14:52.395 SO libspdk_scsi.so.9.0 01:14:52.395 CC lib/ftl/ftl_writer.o 01:14:52.395 CC lib/ftl/ftl_rq.o 01:14:52.395 SYMLINK libspdk_scsi.so 01:14:52.395 CC lib/ftl/ftl_reloc.o 01:14:52.656 CC lib/ftl/ftl_l2p_cache.o 01:14:52.656 CC lib/ftl/ftl_p2l.o 01:14:52.656 CC lib/ftl/ftl_p2l_log.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt_startup.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt_md.o 01:14:52.656 CC lib/ftl/mngt/ftl_mngt_misc.o 01:14:52.917 CC lib/ftl/mngt/ftl_mngt_ioch.o 01:14:52.917 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:14:52.917 CC lib/ftl/mngt/ftl_mngt_band.o 01:14:52.917 CC lib/iscsi/conn.o 01:14:52.917 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 01:14:52.917 CC lib/vhost/vhost.o 01:14:52.917 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:14:52.917 CC lib/vhost/vhost_rpc.o 01:14:52.917 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:14:52.917 CC lib/vhost/vhost_scsi.o 01:14:52.917 CC lib/vhost/vhost_blk.o 01:14:53.175 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:14:53.175 CC lib/vhost/rte_vhost_user.o 01:14:53.175 CC lib/ftl/utils/ftl_conf.o 01:14:53.175 CC lib/iscsi/init_grp.o 01:14:53.432 CC lib/ftl/utils/ftl_md.o 01:14:53.432 CC lib/ftl/utils/ftl_mempool.o 01:14:53.432 CC lib/iscsi/iscsi.o 01:14:53.432 CC lib/iscsi/param.o 01:14:53.432 CC lib/ftl/utils/ftl_bitmap.o 01:14:53.432 CC lib/ftl/utils/ftl_property.o 01:14:53.432 LIB libspdk_nvmf.a 01:14:53.689 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:14:53.689 SO libspdk_nvmf.so.20.0 01:14:53.689 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:14:53.689 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:14:53.689 CC lib/iscsi/portal_grp.o 01:14:53.689 CC lib/iscsi/tgt_node.o 01:14:53.689 CC lib/iscsi/iscsi_subsystem.o 01:14:53.689 LIB libspdk_vhost.a 01:14:53.689 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:14:53.689 SYMLINK libspdk_nvmf.so 01:14:53.689 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:14:53.689 SO libspdk_vhost.so.8.0 01:14:53.946 CC lib/iscsi/iscsi_rpc.o 01:14:53.946 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:14:53.946 CC lib/iscsi/task.o 01:14:53.946 SYMLINK libspdk_vhost.so 01:14:53.946 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:14:53.946 CC lib/ftl/upgrade/ftl_sb_v3.o 01:14:53.946 CC lib/ftl/upgrade/ftl_sb_v5.o 01:14:53.946 CC lib/ftl/nvc/ftl_nvc_dev.o 01:14:53.946 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:14:53.946 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 01:14:54.204 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 01:14:54.204 CC lib/ftl/base/ftl_base_dev.o 01:14:54.204 CC lib/ftl/base/ftl_base_bdev.o 01:14:54.204 CC lib/ftl/ftl_trace.o 01:14:54.204 LIB libspdk_ftl.a 01:14:54.480 LIB libspdk_iscsi.a 01:14:54.480 SO libspdk_iscsi.so.8.0 01:14:54.480 SO libspdk_ftl.so.9.0 01:14:54.480 SYMLINK libspdk_iscsi.so 01:14:54.737 SYMLINK libspdk_ftl.so 01:14:54.995 CC module/env_dpdk/env_dpdk_rpc.o 01:14:54.995 CC module/accel/ioat/accel_ioat.o 01:14:54.995 CC module/sock/posix/posix.o 01:14:54.995 CC module/accel/error/accel_error.o 01:14:54.995 CC module/keyring/file/keyring.o 01:14:54.995 CC module/accel/iaa/accel_iaa.o 01:14:54.995 CC module/blob/bdev/blob_bdev.o 01:14:54.995 CC module/fsdev/aio/fsdev_aio.o 01:14:54.995 CC module/accel/dsa/accel_dsa.o 01:14:54.995 CC module/scheduler/dynamic/scheduler_dynamic.o 01:14:54.995 LIB libspdk_env_dpdk_rpc.a 01:14:54.995 SO libspdk_env_dpdk_rpc.so.6.0 01:14:54.995 SYMLINK libspdk_env_dpdk_rpc.so 01:14:54.995 CC module/accel/error/accel_error_rpc.o 01:14:54.995 CC module/keyring/file/keyring_rpc.o 01:14:54.995 CC module/accel/ioat/accel_ioat_rpc.o 01:14:55.252 CC module/accel/dsa/accel_dsa_rpc.o 01:14:55.252 LIB libspdk_scheduler_dynamic.a 01:14:55.252 CC module/accel/iaa/accel_iaa_rpc.o 01:14:55.252 SO libspdk_scheduler_dynamic.so.4.0 01:14:55.252 LIB libspdk_blob_bdev.a 01:14:55.252 LIB libspdk_accel_error.a 01:14:55.252 SYMLINK libspdk_scheduler_dynamic.so 01:14:55.252 CC module/fsdev/aio/fsdev_aio_rpc.o 01:14:55.252 SO libspdk_blob_bdev.so.12.0 01:14:55.252 LIB libspdk_keyring_file.a 01:14:55.252 SO libspdk_accel_error.so.2.0 01:14:55.252 LIB libspdk_accel_ioat.a 01:14:55.252 SO libspdk_keyring_file.so.2.0 01:14:55.252 LIB libspdk_accel_iaa.a 01:14:55.252 LIB libspdk_accel_dsa.a 01:14:55.252 SO libspdk_accel_ioat.so.6.0 01:14:55.252 SO libspdk_accel_iaa.so.3.0 
01:14:55.252 SYMLINK libspdk_blob_bdev.so 01:14:55.252 SO libspdk_accel_dsa.so.5.0 01:14:55.252 SYMLINK libspdk_accel_error.so 01:14:55.252 SYMLINK libspdk_keyring_file.so 01:14:55.252 SYMLINK libspdk_accel_iaa.so 01:14:55.253 SYMLINK libspdk_accel_ioat.so 01:14:55.253 CC module/fsdev/aio/linux_aio_mgr.o 01:14:55.253 SYMLINK libspdk_accel_dsa.so 01:14:55.253 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:14:55.510 CC module/keyring/linux/keyring.o 01:14:55.510 CC module/scheduler/gscheduler/gscheduler.o 01:14:55.510 LIB libspdk_scheduler_dpdk_governor.a 01:14:55.510 LIB libspdk_fsdev_aio.a 01:14:55.510 SO libspdk_scheduler_dpdk_governor.so.4.0 01:14:55.510 CC module/blobfs/bdev/blobfs_bdev.o 01:14:55.510 CC module/bdev/gpt/gpt.o 01:14:55.510 CC module/bdev/error/vbdev_error.o 01:14:55.510 CC module/bdev/delay/vbdev_delay.o 01:14:55.510 SO libspdk_fsdev_aio.so.1.0 01:14:55.510 LIB libspdk_sock_posix.a 01:14:55.510 CC module/bdev/lvol/vbdev_lvol.o 01:14:55.510 SYMLINK libspdk_scheduler_dpdk_governor.so 01:14:55.510 SO libspdk_sock_posix.so.6.0 01:14:55.510 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:14:55.510 SYMLINK libspdk_fsdev_aio.so 01:14:55.510 CC module/keyring/linux/keyring_rpc.o 01:14:55.510 LIB libspdk_scheduler_gscheduler.a 01:14:55.510 CC module/bdev/error/vbdev_error_rpc.o 01:14:55.510 SO libspdk_scheduler_gscheduler.so.4.0 01:14:55.510 SYMLINK libspdk_sock_posix.so 01:14:55.510 CC module/bdev/gpt/vbdev_gpt.o 01:14:55.510 CC module/bdev/delay/vbdev_delay_rpc.o 01:14:55.767 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:14:55.767 SYMLINK libspdk_scheduler_gscheduler.so 01:14:55.767 LIB libspdk_keyring_linux.a 01:14:55.767 LIB libspdk_blobfs_bdev.a 01:14:55.767 SO libspdk_keyring_linux.so.1.0 01:14:55.767 LIB libspdk_bdev_error.a 01:14:55.767 SO libspdk_blobfs_bdev.so.6.0 01:14:55.767 SO libspdk_bdev_error.so.6.0 01:14:55.767 SYMLINK libspdk_keyring_linux.so 01:14:55.767 SYMLINK libspdk_blobfs_bdev.so 01:14:55.767 SYMLINK libspdk_bdev_error.so 01:14:55.767 LIB libspdk_bdev_gpt.a 01:14:55.767 CC module/bdev/malloc/bdev_malloc.o 01:14:55.767 SO libspdk_bdev_gpt.so.6.0 01:14:55.767 LIB libspdk_bdev_delay.a 01:14:55.767 CC module/bdev/null/bdev_null.o 01:14:55.767 CC module/bdev/null/bdev_null_rpc.o 01:14:55.767 SO libspdk_bdev_delay.so.6.0 01:14:55.767 SYMLINK libspdk_bdev_gpt.so 01:14:55.767 CC module/bdev/malloc/bdev_malloc_rpc.o 01:14:56.025 SYMLINK libspdk_bdev_delay.so 01:14:56.025 CC module/bdev/nvme/bdev_nvme.o 01:14:56.025 CC module/bdev/raid/bdev_raid.o 01:14:56.025 LIB libspdk_bdev_lvol.a 01:14:56.025 CC module/bdev/passthru/vbdev_passthru.o 01:14:56.025 CC module/bdev/split/vbdev_split.o 01:14:56.025 SO libspdk_bdev_lvol.so.6.0 01:14:56.025 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:14:56.025 LIB libspdk_bdev_null.a 01:14:56.025 SYMLINK libspdk_bdev_lvol.so 01:14:56.025 CC module/bdev/raid/bdev_raid_rpc.o 01:14:56.025 CC module/bdev/nvme/bdev_nvme_rpc.o 01:14:56.025 SO libspdk_bdev_null.so.6.0 01:14:56.025 CC module/bdev/zone_block/vbdev_zone_block.o 01:14:56.025 SYMLINK libspdk_bdev_null.so 01:14:56.025 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:14:56.025 CC module/bdev/raid/bdev_raid_sb.o 01:14:56.025 LIB libspdk_bdev_malloc.a 01:14:56.025 CC module/bdev/split/vbdev_split_rpc.o 01:14:56.025 SO libspdk_bdev_malloc.so.6.0 01:14:56.283 LIB libspdk_bdev_passthru.a 01:14:56.283 SYMLINK libspdk_bdev_malloc.so 01:14:56.283 CC module/bdev/raid/raid0.o 01:14:56.283 CC module/bdev/raid/raid1.o 01:14:56.283 SO libspdk_bdev_passthru.so.6.0 01:14:56.283 CC 
module/bdev/raid/concat.o 01:14:56.283 SYMLINK libspdk_bdev_passthru.so 01:14:56.283 LIB libspdk_bdev_split.a 01:14:56.283 SO libspdk_bdev_split.so.6.0 01:14:56.283 CC module/bdev/aio/bdev_aio.o 01:14:56.283 LIB libspdk_bdev_zone_block.a 01:14:56.283 SO libspdk_bdev_zone_block.so.6.0 01:14:56.283 SYMLINK libspdk_bdev_split.so 01:14:56.283 SYMLINK libspdk_bdev_zone_block.so 01:14:56.283 CC module/bdev/nvme/nvme_rpc.o 01:14:56.283 CC module/bdev/nvme/bdev_mdns_client.o 01:14:56.542 CC module/bdev/nvme/vbdev_opal.o 01:14:56.542 CC module/bdev/nvme/vbdev_opal_rpc.o 01:14:56.542 CC module/bdev/ftl/bdev_ftl.o 01:14:56.542 CC module/bdev/iscsi/bdev_iscsi.o 01:14:56.542 CC module/bdev/virtio/bdev_virtio_scsi.o 01:14:56.542 CC module/bdev/virtio/bdev_virtio_blk.o 01:14:56.542 CC module/bdev/virtio/bdev_virtio_rpc.o 01:14:56.542 CC module/bdev/aio/bdev_aio_rpc.o 01:14:56.542 CC module/bdev/ftl/bdev_ftl_rpc.o 01:14:56.542 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:14:56.542 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:14:56.800 LIB libspdk_bdev_raid.a 01:14:56.800 LIB libspdk_bdev_aio.a 01:14:56.800 LIB libspdk_bdev_ftl.a 01:14:56.800 SO libspdk_bdev_raid.so.6.0 01:14:56.800 SO libspdk_bdev_aio.so.6.0 01:14:56.800 LIB libspdk_bdev_iscsi.a 01:14:56.800 SO libspdk_bdev_ftl.so.6.0 01:14:56.800 SO libspdk_bdev_iscsi.so.6.0 01:14:56.800 SYMLINK libspdk_bdev_raid.so 01:14:56.800 SYMLINK libspdk_bdev_aio.so 01:14:56.800 SYMLINK libspdk_bdev_ftl.so 01:14:56.800 SYMLINK libspdk_bdev_iscsi.so 01:14:56.800 LIB libspdk_bdev_virtio.a 01:14:57.059 SO libspdk_bdev_virtio.so.6.0 01:14:57.059 SYMLINK libspdk_bdev_virtio.so 01:14:58.003 LIB libspdk_bdev_nvme.a 01:14:58.003 SO libspdk_bdev_nvme.so.7.1 01:14:58.264 SYMLINK libspdk_bdev_nvme.so 01:14:58.525 CC module/event/subsystems/iobuf/iobuf.o 01:14:58.525 CC module/event/subsystems/scheduler/scheduler.o 01:14:58.525 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:14:58.525 CC module/event/subsystems/keyring/keyring.o 01:14:58.525 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:14:58.525 CC module/event/subsystems/fsdev/fsdev.o 01:14:58.525 CC module/event/subsystems/vmd/vmd.o 01:14:58.525 CC module/event/subsystems/vmd/vmd_rpc.o 01:14:58.525 CC module/event/subsystems/sock/sock.o 01:14:58.525 LIB libspdk_event_keyring.a 01:14:58.525 SO libspdk_event_keyring.so.1.0 01:14:58.525 LIB libspdk_event_scheduler.a 01:14:58.525 LIB libspdk_event_fsdev.a 01:14:58.525 LIB libspdk_event_vmd.a 01:14:58.525 LIB libspdk_event_iobuf.a 01:14:58.796 LIB libspdk_event_sock.a 01:14:58.796 SO libspdk_event_scheduler.so.4.0 01:14:58.796 LIB libspdk_event_vhost_blk.a 01:14:58.796 SO libspdk_event_fsdev.so.1.0 01:14:58.796 SYMLINK libspdk_event_keyring.so 01:14:58.796 SO libspdk_event_iobuf.so.3.0 01:14:58.796 SO libspdk_event_vmd.so.6.0 01:14:58.796 SO libspdk_event_sock.so.5.0 01:14:58.796 SO libspdk_event_vhost_blk.so.3.0 01:14:58.796 SYMLINK libspdk_event_scheduler.so 01:14:58.796 SYMLINK libspdk_event_fsdev.so 01:14:58.796 SYMLINK libspdk_event_sock.so 01:14:58.796 SYMLINK libspdk_event_iobuf.so 01:14:58.796 SYMLINK libspdk_event_vhost_blk.so 01:14:58.796 SYMLINK libspdk_event_vmd.so 01:14:59.059 CC module/event/subsystems/accel/accel.o 01:14:59.059 LIB libspdk_event_accel.a 01:14:59.059 SO libspdk_event_accel.so.6.0 01:14:59.059 SYMLINK libspdk_event_accel.so 01:14:59.323 CC module/event/subsystems/bdev/bdev.o 01:14:59.585 LIB libspdk_event_bdev.a 01:14:59.585 SO libspdk_event_bdev.so.6.0 01:14:59.585 SYMLINK libspdk_event_bdev.so 01:14:59.848 CC module/event/subsystems/nbd/nbd.o 
01:14:59.848 CC module/event/subsystems/ublk/ublk.o 01:14:59.848 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:14:59.848 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:14:59.848 CC module/event/subsystems/scsi/scsi.o 01:14:59.848 LIB libspdk_event_nbd.a 01:14:59.848 LIB libspdk_event_ublk.a 01:14:59.848 LIB libspdk_event_scsi.a 01:14:59.848 SO libspdk_event_nbd.so.6.0 01:14:59.848 SO libspdk_event_ublk.so.3.0 01:14:59.848 SO libspdk_event_scsi.so.6.0 01:14:59.848 SYMLINK libspdk_event_nbd.so 01:14:59.848 SYMLINK libspdk_event_ublk.so 01:14:59.848 LIB libspdk_event_nvmf.a 01:14:59.848 SYMLINK libspdk_event_scsi.so 01:15:00.110 SO libspdk_event_nvmf.so.6.0 01:15:00.110 SYMLINK libspdk_event_nvmf.so 01:15:00.110 CC module/event/subsystems/iscsi/iscsi.o 01:15:00.110 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:15:00.373 LIB libspdk_event_vhost_scsi.a 01:15:00.373 LIB libspdk_event_iscsi.a 01:15:00.373 SO libspdk_event_vhost_scsi.so.3.0 01:15:00.373 SO libspdk_event_iscsi.so.6.0 01:15:00.373 SYMLINK libspdk_event_vhost_scsi.so 01:15:00.373 SYMLINK libspdk_event_iscsi.so 01:15:00.373 SO libspdk.so.6.0 01:15:00.373 SYMLINK libspdk.so 01:15:00.636 TEST_HEADER include/spdk/accel.h 01:15:00.636 TEST_HEADER include/spdk/accel_module.h 01:15:00.636 TEST_HEADER include/spdk/assert.h 01:15:00.636 TEST_HEADER include/spdk/barrier.h 01:15:00.636 TEST_HEADER include/spdk/base64.h 01:15:00.636 CXX app/trace/trace.o 01:15:00.636 TEST_HEADER include/spdk/bdev.h 01:15:00.636 TEST_HEADER include/spdk/bdev_module.h 01:15:00.636 TEST_HEADER include/spdk/bdev_zone.h 01:15:00.636 TEST_HEADER include/spdk/bit_array.h 01:15:00.636 TEST_HEADER include/spdk/bit_pool.h 01:15:00.636 CC examples/interrupt_tgt/interrupt_tgt.o 01:15:00.636 TEST_HEADER include/spdk/blob_bdev.h 01:15:00.636 TEST_HEADER include/spdk/blobfs_bdev.h 01:15:00.636 TEST_HEADER include/spdk/blobfs.h 01:15:00.636 TEST_HEADER include/spdk/blob.h 01:15:00.636 TEST_HEADER include/spdk/conf.h 01:15:00.636 TEST_HEADER include/spdk/config.h 01:15:00.636 TEST_HEADER include/spdk/cpuset.h 01:15:00.636 TEST_HEADER include/spdk/crc16.h 01:15:00.636 TEST_HEADER include/spdk/crc32.h 01:15:00.636 TEST_HEADER include/spdk/crc64.h 01:15:00.636 TEST_HEADER include/spdk/dif.h 01:15:00.636 TEST_HEADER include/spdk/dma.h 01:15:00.636 TEST_HEADER include/spdk/endian.h 01:15:00.636 TEST_HEADER include/spdk/env_dpdk.h 01:15:00.636 TEST_HEADER include/spdk/env.h 01:15:00.636 TEST_HEADER include/spdk/event.h 01:15:00.636 TEST_HEADER include/spdk/fd_group.h 01:15:00.636 TEST_HEADER include/spdk/fd.h 01:15:00.636 TEST_HEADER include/spdk/file.h 01:15:00.636 TEST_HEADER include/spdk/fsdev.h 01:15:00.636 TEST_HEADER include/spdk/fsdev_module.h 01:15:00.636 CC examples/util/zipf/zipf.o 01:15:00.636 TEST_HEADER include/spdk/ftl.h 01:15:00.636 TEST_HEADER include/spdk/fuse_dispatcher.h 01:15:00.636 CC test/thread/poller_perf/poller_perf.o 01:15:00.636 TEST_HEADER include/spdk/gpt_spec.h 01:15:00.636 TEST_HEADER include/spdk/hexlify.h 01:15:00.636 CC examples/ioat/perf/perf.o 01:15:00.636 TEST_HEADER include/spdk/histogram_data.h 01:15:00.636 TEST_HEADER include/spdk/idxd.h 01:15:00.636 TEST_HEADER include/spdk/idxd_spec.h 01:15:00.636 TEST_HEADER include/spdk/init.h 01:15:00.636 TEST_HEADER include/spdk/ioat.h 01:15:00.636 TEST_HEADER include/spdk/ioat_spec.h 01:15:00.636 TEST_HEADER include/spdk/iscsi_spec.h 01:15:00.636 TEST_HEADER include/spdk/json.h 01:15:00.636 TEST_HEADER include/spdk/jsonrpc.h 01:15:00.636 TEST_HEADER include/spdk/keyring.h 01:15:00.636 TEST_HEADER 
include/spdk/keyring_module.h 01:15:00.636 TEST_HEADER include/spdk/likely.h 01:15:00.636 TEST_HEADER include/spdk/log.h 01:15:00.636 TEST_HEADER include/spdk/lvol.h 01:15:00.636 TEST_HEADER include/spdk/md5.h 01:15:00.636 TEST_HEADER include/spdk/memory.h 01:15:00.636 CC test/dma/test_dma/test_dma.o 01:15:00.636 TEST_HEADER include/spdk/mmio.h 01:15:00.636 TEST_HEADER include/spdk/nbd.h 01:15:00.636 TEST_HEADER include/spdk/net.h 01:15:00.636 TEST_HEADER include/spdk/notify.h 01:15:00.636 TEST_HEADER include/spdk/nvme.h 01:15:00.636 TEST_HEADER include/spdk/nvme_intel.h 01:15:00.636 TEST_HEADER include/spdk/nvme_ocssd.h 01:15:00.636 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:15:00.636 TEST_HEADER include/spdk/nvme_spec.h 01:15:00.636 TEST_HEADER include/spdk/nvme_zns.h 01:15:00.636 TEST_HEADER include/spdk/nvmf_cmd.h 01:15:00.636 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:15:00.636 TEST_HEADER include/spdk/nvmf.h 01:15:00.636 CC test/app/bdev_svc/bdev_svc.o 01:15:00.636 CC test/env/mem_callbacks/mem_callbacks.o 01:15:00.636 TEST_HEADER include/spdk/nvmf_spec.h 01:15:00.636 TEST_HEADER include/spdk/nvmf_transport.h 01:15:00.899 TEST_HEADER include/spdk/opal.h 01:15:00.899 TEST_HEADER include/spdk/opal_spec.h 01:15:00.899 TEST_HEADER include/spdk/pci_ids.h 01:15:00.899 TEST_HEADER include/spdk/pipe.h 01:15:00.899 TEST_HEADER include/spdk/queue.h 01:15:00.899 TEST_HEADER include/spdk/reduce.h 01:15:00.899 TEST_HEADER include/spdk/rpc.h 01:15:00.899 TEST_HEADER include/spdk/scheduler.h 01:15:00.899 TEST_HEADER include/spdk/scsi.h 01:15:00.899 TEST_HEADER include/spdk/scsi_spec.h 01:15:00.899 TEST_HEADER include/spdk/sock.h 01:15:00.899 TEST_HEADER include/spdk/stdinc.h 01:15:00.899 TEST_HEADER include/spdk/string.h 01:15:00.899 TEST_HEADER include/spdk/thread.h 01:15:00.899 TEST_HEADER include/spdk/trace.h 01:15:00.899 TEST_HEADER include/spdk/trace_parser.h 01:15:00.899 TEST_HEADER include/spdk/tree.h 01:15:00.899 TEST_HEADER include/spdk/ublk.h 01:15:00.899 TEST_HEADER include/spdk/util.h 01:15:00.899 TEST_HEADER include/spdk/uuid.h 01:15:00.899 TEST_HEADER include/spdk/version.h 01:15:00.899 TEST_HEADER include/spdk/vfio_user_pci.h 01:15:00.899 TEST_HEADER include/spdk/vfio_user_spec.h 01:15:00.899 TEST_HEADER include/spdk/vhost.h 01:15:00.899 TEST_HEADER include/spdk/vmd.h 01:15:00.899 TEST_HEADER include/spdk/xor.h 01:15:00.899 TEST_HEADER include/spdk/zipf.h 01:15:00.899 CXX test/cpp_headers/accel.o 01:15:00.899 LINK zipf 01:15:00.899 LINK interrupt_tgt 01:15:00.899 LINK poller_perf 01:15:00.899 LINK ioat_perf 01:15:00.899 CXX test/cpp_headers/accel_module.o 01:15:00.899 CXX test/cpp_headers/assert.o 01:15:00.899 LINK bdev_svc 01:15:00.899 LINK spdk_trace 01:15:01.185 CC test/env/vtophys/vtophys.o 01:15:01.185 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:15:01.185 CXX test/cpp_headers/barrier.o 01:15:01.185 CC examples/ioat/verify/verify.o 01:15:01.185 CC test/env/memory/memory_ut.o 01:15:01.185 LINK vtophys 01:15:01.185 LINK test_dma 01:15:01.185 CC app/trace_record/trace_record.o 01:15:01.185 LINK env_dpdk_post_init 01:15:01.185 CXX test/cpp_headers/base64.o 01:15:01.185 LINK mem_callbacks 01:15:01.185 LINK verify 01:15:01.185 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:15:01.470 LINK spdk_trace_record 01:15:01.470 CXX test/cpp_headers/bdev.o 01:15:01.470 CC test/app/histogram_perf/histogram_perf.o 01:15:01.470 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:15:01.470 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:15:01.470 LINK histogram_perf 01:15:01.470 CC 
examples/thread/thread/thread_ex.o 01:15:01.470 CC examples/sock/hello_world/hello_sock.o 01:15:01.470 CXX test/cpp_headers/bdev_module.o 01:15:01.470 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:15:01.470 CC app/nvmf_tgt/nvmf_main.o 01:15:01.733 LINK nvme_fuzz 01:15:01.733 LINK hello_sock 01:15:01.733 CXX test/cpp_headers/bdev_zone.o 01:15:01.733 LINK thread 01:15:01.733 CC examples/vmd/lsvmd/lsvmd.o 01:15:01.733 LINK nvmf_tgt 01:15:01.733 CC test/rpc_client/rpc_client_test.o 01:15:01.996 CXX test/cpp_headers/bit_array.o 01:15:01.996 LINK lsvmd 01:15:01.996 LINK vhost_fuzz 01:15:01.996 LINK rpc_client_test 01:15:01.996 CC examples/vmd/led/led.o 01:15:01.996 CXX test/cpp_headers/bit_pool.o 01:15:01.996 CC test/accel/dif/dif.o 01:15:01.996 CC app/iscsi_tgt/iscsi_tgt.o 01:15:01.996 CC test/env/pci/pci_ut.o 01:15:01.996 LINK led 01:15:02.258 LINK memory_ut 01:15:02.258 CXX test/cpp_headers/blob_bdev.o 01:15:02.258 CC test/blobfs/mkfs/mkfs.o 01:15:02.258 CC examples/idxd/perf/perf.o 01:15:02.258 LINK iscsi_tgt 01:15:02.258 CXX test/cpp_headers/blobfs_bdev.o 01:15:02.258 CC test/event/event_perf/event_perf.o 01:15:02.520 LINK pci_ut 01:15:02.520 LINK mkfs 01:15:02.520 CXX test/cpp_headers/blobfs.o 01:15:02.520 LINK iscsi_fuzz 01:15:02.520 CC test/lvol/esnap/esnap.o 01:15:02.520 LINK event_perf 01:15:02.520 CC app/spdk_tgt/spdk_tgt.o 01:15:02.520 LINK idxd_perf 01:15:02.520 LINK dif 01:15:02.786 CXX test/cpp_headers/blob.o 01:15:02.786 CC test/app/jsoncat/jsoncat.o 01:15:02.786 CC test/event/reactor/reactor.o 01:15:02.786 CC app/spdk_lspci/spdk_lspci.o 01:15:02.786 CC test/nvme/aer/aer.o 01:15:02.786 CC app/spdk_nvme_perf/perf.o 01:15:02.786 CXX test/cpp_headers/conf.o 01:15:02.786 LINK spdk_tgt 01:15:02.786 LINK reactor 01:15:03.049 CC app/spdk_nvme_identify/identify.o 01:15:03.049 LINK spdk_lspci 01:15:03.049 LINK jsoncat 01:15:03.049 CXX test/cpp_headers/config.o 01:15:03.049 CXX test/cpp_headers/cpuset.o 01:15:03.049 CC test/event/reactor_perf/reactor_perf.o 01:15:03.049 LINK aer 01:15:03.049 CXX test/cpp_headers/crc16.o 01:15:03.049 CC test/app/stub/stub.o 01:15:03.049 CC test/nvme/reset/reset.o 01:15:03.308 LINK reactor_perf 01:15:03.308 CXX test/cpp_headers/crc32.o 01:15:03.308 CC test/bdev/bdevio/bdevio.o 01:15:03.308 CC test/nvme/sgl/sgl.o 01:15:03.308 LINK stub 01:15:03.308 LINK reset 01:15:03.308 CC test/event/app_repeat/app_repeat.o 01:15:03.569 CXX test/cpp_headers/crc64.o 01:15:03.569 CXX test/cpp_headers/dif.o 01:15:03.569 LINK spdk_nvme_perf 01:15:03.569 LINK sgl 01:15:03.569 CXX test/cpp_headers/dma.o 01:15:03.569 LINK app_repeat 01:15:03.569 LINK spdk_nvme_identify 01:15:03.837 CXX test/cpp_headers/endian.o 01:15:03.837 CC test/nvme/e2edp/nvme_dp.o 01:15:03.837 LINK bdevio 01:15:03.837 CC app/spdk_nvme_discover/discovery_aer.o 01:15:03.837 CC test/event/scheduler/scheduler.o 01:15:03.837 CXX test/cpp_headers/env_dpdk.o 01:15:03.837 CXX test/cpp_headers/env.o 01:15:04.106 CC examples/fsdev/hello_world/hello_fsdev.o 01:15:04.106 LINK spdk_nvme_discover 01:15:04.106 CC examples/accel/perf/accel_perf.o 01:15:04.106 LINK nvme_dp 01:15:04.106 CXX test/cpp_headers/event.o 01:15:04.106 CC test/nvme/overhead/overhead.o 01:15:04.106 CC test/nvme/err_injection/err_injection.o 01:15:04.368 CXX test/cpp_headers/fd_group.o 01:15:04.368 CXX test/cpp_headers/fd.o 01:15:04.368 LINK scheduler 01:15:04.368 CC app/spdk_top/spdk_top.o 01:15:04.368 LINK hello_fsdev 01:15:04.368 LINK err_injection 01:15:04.368 LINK overhead 01:15:04.368 CXX test/cpp_headers/file.o 01:15:04.629 CC 
test/nvme/startup/startup.o 01:15:04.629 CC test/nvme/simple_copy/simple_copy.o 01:15:04.630 CC test/nvme/reserve/reserve.o 01:15:04.630 CXX test/cpp_headers/fsdev.o 01:15:04.630 CC test/nvme/connect_stress/connect_stress.o 01:15:04.630 LINK accel_perf 01:15:04.630 LINK startup 01:15:04.630 CC test/nvme/boot_partition/boot_partition.o 01:15:04.630 CXX test/cpp_headers/fsdev_module.o 01:15:04.630 CXX test/cpp_headers/ftl.o 01:15:04.630 LINK reserve 01:15:04.890 LINK simple_copy 01:15:04.890 CXX test/cpp_headers/fuse_dispatcher.o 01:15:04.890 LINK connect_stress 01:15:04.890 LINK boot_partition 01:15:04.890 CXX test/cpp_headers/gpt_spec.o 01:15:04.890 CXX test/cpp_headers/hexlify.o 01:15:04.890 CC test/nvme/compliance/nvme_compliance.o 01:15:04.890 CC test/nvme/fused_ordering/fused_ordering.o 01:15:04.890 CC test/nvme/doorbell_aers/doorbell_aers.o 01:15:04.890 CC test/nvme/fdp/fdp.o 01:15:04.890 CXX test/cpp_headers/histogram_data.o 01:15:05.151 CC test/nvme/cuse/cuse.o 01:15:05.151 LINK spdk_top 01:15:05.151 CC app/vhost/vhost.o 01:15:05.151 CXX test/cpp_headers/idxd.o 01:15:05.151 LINK fused_ordering 01:15:05.151 LINK doorbell_aers 01:15:05.151 LINK nvme_compliance 01:15:05.151 LINK vhost 01:15:05.411 CXX test/cpp_headers/idxd_spec.o 01:15:05.411 LINK fdp 01:15:05.411 CC examples/blob/hello_world/hello_blob.o 01:15:05.411 CC examples/blob/cli/blobcli.o 01:15:05.411 CC app/spdk_dd/spdk_dd.o 01:15:05.411 CC app/fio/nvme/fio_plugin.o 01:15:05.411 CXX test/cpp_headers/init.o 01:15:05.411 LINK hello_blob 01:15:05.681 CXX test/cpp_headers/ioat.o 01:15:05.681 CC app/fio/bdev/fio_plugin.o 01:15:05.681 CC examples/nvme/hello_world/hello_world.o 01:15:05.681 LINK spdk_dd 01:15:05.681 LINK spdk_nvme 01:15:05.681 LINK blobcli 01:15:05.681 CC examples/nvme/reconnect/reconnect.o 01:15:05.681 CXX test/cpp_headers/ioat_spec.o 01:15:05.681 LINK hello_world 01:15:05.940 CXX test/cpp_headers/iscsi_spec.o 01:15:05.940 LINK spdk_bdev 01:15:05.940 CC examples/nvme/nvme_manage/nvme_manage.o 01:15:05.940 CC examples/nvme/arbitration/arbitration.o 01:15:05.940 CXX test/cpp_headers/json.o 01:15:05.940 CC examples/nvme/cmb_copy/cmb_copy.o 01:15:05.940 CC examples/nvme/abort/abort.o 01:15:06.200 LINK reconnect 01:15:06.200 CC examples/nvme/hotplug/hotplug.o 01:15:06.200 CXX test/cpp_headers/jsonrpc.o 01:15:06.200 LINK cmb_copy 01:15:06.200 LINK cuse 01:15:06.200 CC examples/bdev/hello_world/hello_bdev.o 01:15:06.200 CXX test/cpp_headers/keyring.o 01:15:06.200 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:15:06.200 LINK arbitration 01:15:06.200 LINK hotplug 01:15:06.200 LINK nvme_manage 01:15:06.460 CXX test/cpp_headers/keyring_module.o 01:15:06.460 LINK abort 01:15:06.460 CXX test/cpp_headers/likely.o 01:15:06.460 CC examples/bdev/bdevperf/bdevperf.o 01:15:06.460 CXX test/cpp_headers/log.o 01:15:06.460 CXX test/cpp_headers/lvol.o 01:15:06.460 CXX test/cpp_headers/md5.o 01:15:06.460 LINK pmr_persistence 01:15:06.460 LINK hello_bdev 01:15:06.460 CXX test/cpp_headers/memory.o 01:15:06.460 CXX test/cpp_headers/mmio.o 01:15:06.460 CXX test/cpp_headers/nbd.o 01:15:06.460 CXX test/cpp_headers/net.o 01:15:06.460 CXX test/cpp_headers/notify.o 01:15:06.460 CXX test/cpp_headers/nvme.o 01:15:06.460 CXX test/cpp_headers/nvme_intel.o 01:15:06.460 CXX test/cpp_headers/nvme_ocssd.o 01:15:06.460 CXX test/cpp_headers/nvme_ocssd_spec.o 01:15:06.721 CXX test/cpp_headers/nvme_spec.o 01:15:06.721 CXX test/cpp_headers/nvme_zns.o 01:15:06.721 CXX test/cpp_headers/nvmf_cmd.o 01:15:06.721 CXX test/cpp_headers/nvmf_fc_spec.o 
01:15:06.721 CXX test/cpp_headers/nvmf.o 01:15:06.721 CXX test/cpp_headers/nvmf_spec.o 01:15:06.721 CXX test/cpp_headers/nvmf_transport.o 01:15:06.721 CXX test/cpp_headers/opal.o 01:15:06.721 CXX test/cpp_headers/opal_spec.o 01:15:06.721 CXX test/cpp_headers/pci_ids.o 01:15:06.721 CXX test/cpp_headers/pipe.o 01:15:06.721 CXX test/cpp_headers/queue.o 01:15:06.721 CXX test/cpp_headers/reduce.o 01:15:06.721 CXX test/cpp_headers/rpc.o 01:15:06.721 CXX test/cpp_headers/scheduler.o 01:15:06.721 LINK bdevperf 01:15:06.981 CXX test/cpp_headers/scsi.o 01:15:06.981 CXX test/cpp_headers/scsi_spec.o 01:15:06.981 CXX test/cpp_headers/sock.o 01:15:06.981 CXX test/cpp_headers/stdinc.o 01:15:06.981 LINK esnap 01:15:06.981 CXX test/cpp_headers/string.o 01:15:06.982 CXX test/cpp_headers/thread.o 01:15:06.982 CXX test/cpp_headers/trace.o 01:15:06.982 CXX test/cpp_headers/trace_parser.o 01:15:06.982 CXX test/cpp_headers/tree.o 01:15:06.982 CXX test/cpp_headers/ublk.o 01:15:06.982 CXX test/cpp_headers/util.o 01:15:06.982 CXX test/cpp_headers/uuid.o 01:15:06.982 CXX test/cpp_headers/version.o 01:15:06.982 CXX test/cpp_headers/vfio_user_pci.o 01:15:06.982 CXX test/cpp_headers/vfio_user_spec.o 01:15:06.982 CXX test/cpp_headers/vhost.o 01:15:06.982 CXX test/cpp_headers/vmd.o 01:15:07.242 CXX test/cpp_headers/xor.o 01:15:07.242 CXX test/cpp_headers/zipf.o 01:15:07.242 CC examples/nvmf/nvmf/nvmf.o 01:15:07.242 LINK nvmf 01:15:07.502 01:15:07.502 real 1m0.030s 01:15:07.502 user 5m45.870s 01:15:07.502 sys 1m3.981s 01:15:07.502 05:09:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:15:07.502 ************************************ 01:15:07.502 END TEST make 01:15:07.502 ************************************ 01:15:07.502 05:09:58 make -- common/autotest_common.sh@10 -- $ set +x 01:15:07.502 05:09:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:15:07.502 05:09:59 -- pm/common@29 -- $ signal_monitor_resources TERM 01:15:07.502 05:09:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:15:07.502 05:09:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:15:07.502 05:09:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:15:07.502 05:09:59 -- pm/common@44 -- $ pid=5027 01:15:07.502 05:09:59 -- pm/common@50 -- $ kill -TERM 5027 01:15:07.502 05:09:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:15:07.502 05:09:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:15:07.502 05:09:59 -- pm/common@44 -- $ pid=5028 01:15:07.502 05:09:59 -- pm/common@50 -- $ kill -TERM 5028 01:15:07.502 05:09:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 01:15:07.502 05:09:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:15:07.502 05:09:59 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:07.502 05:09:59 -- common/autotest_common.sh@1693 -- # lcov --version 01:15:07.502 05:09:59 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:07.763 05:09:59 -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:07.763 05:09:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:07.763 05:09:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:07.763 05:09:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:07.763 05:09:59 -- scripts/common.sh@336 -- # IFS=.-: 01:15:07.763 05:09:59 -- scripts/common.sh@336 -- # read -ra ver1 01:15:07.763 05:09:59 
-- scripts/common.sh@337 -- # IFS=.-: 01:15:07.763 05:09:59 -- scripts/common.sh@337 -- # read -ra ver2 01:15:07.763 05:09:59 -- scripts/common.sh@338 -- # local 'op=<' 01:15:07.763 05:09:59 -- scripts/common.sh@340 -- # ver1_l=2 01:15:07.763 05:09:59 -- scripts/common.sh@341 -- # ver2_l=1 01:15:07.763 05:09:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:07.763 05:09:59 -- scripts/common.sh@344 -- # case "$op" in 01:15:07.763 05:09:59 -- scripts/common.sh@345 -- # : 1 01:15:07.763 05:09:59 -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:07.763 05:09:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:07.763 05:09:59 -- scripts/common.sh@365 -- # decimal 1 01:15:07.763 05:09:59 -- scripts/common.sh@353 -- # local d=1 01:15:07.763 05:09:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:07.763 05:09:59 -- scripts/common.sh@355 -- # echo 1 01:15:07.763 05:09:59 -- scripts/common.sh@365 -- # ver1[v]=1 01:15:07.763 05:09:59 -- scripts/common.sh@366 -- # decimal 2 01:15:07.763 05:09:59 -- scripts/common.sh@353 -- # local d=2 01:15:07.763 05:09:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:07.763 05:09:59 -- scripts/common.sh@355 -- # echo 2 01:15:07.763 05:09:59 -- scripts/common.sh@366 -- # ver2[v]=2 01:15:07.763 05:09:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:07.763 05:09:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:07.763 05:09:59 -- scripts/common.sh@368 -- # return 0 01:15:07.763 05:09:59 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:07.763 05:09:59 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:07.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.763 --rc genhtml_branch_coverage=1 01:15:07.763 --rc genhtml_function_coverage=1 01:15:07.763 --rc genhtml_legend=1 01:15:07.763 --rc geninfo_all_blocks=1 01:15:07.764 --rc geninfo_unexecuted_blocks=1 01:15:07.764 01:15:07.764 ' 01:15:07.764 05:09:59 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:07.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.764 --rc genhtml_branch_coverage=1 01:15:07.764 --rc genhtml_function_coverage=1 01:15:07.764 --rc genhtml_legend=1 01:15:07.764 --rc geninfo_all_blocks=1 01:15:07.764 --rc geninfo_unexecuted_blocks=1 01:15:07.764 01:15:07.764 ' 01:15:07.764 05:09:59 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:07.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.764 --rc genhtml_branch_coverage=1 01:15:07.764 --rc genhtml_function_coverage=1 01:15:07.764 --rc genhtml_legend=1 01:15:07.764 --rc geninfo_all_blocks=1 01:15:07.764 --rc geninfo_unexecuted_blocks=1 01:15:07.764 01:15:07.764 ' 01:15:07.764 05:09:59 -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:07.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.764 --rc genhtml_branch_coverage=1 01:15:07.764 --rc genhtml_function_coverage=1 01:15:07.764 --rc genhtml_legend=1 01:15:07.764 --rc geninfo_all_blocks=1 01:15:07.764 --rc geninfo_unexecuted_blocks=1 01:15:07.764 01:15:07.764 ' 01:15:07.764 05:09:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:15:07.764 05:09:59 -- nvmf/common.sh@7 -- # uname -s 01:15:07.764 05:09:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:15:07.764 05:09:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:15:07.764 05:09:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:15:07.764 
05:09:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:15:07.764 05:09:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:15:07.764 05:09:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:15:07.764 05:09:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:15:07.764 05:09:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:15:07.764 05:09:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:15:07.764 05:09:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:15:07.764 05:09:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:15:07.764 05:09:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:15:07.764 05:09:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:15:07.764 05:09:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:15:07.764 05:09:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:15:07.764 05:09:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:15:07.764 05:09:59 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:07.764 05:09:59 -- scripts/common.sh@15 -- # shopt -s extglob 01:15:07.764 05:09:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:07.764 05:09:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:07.764 05:09:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:07.764 05:09:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:07.764 05:09:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:07.764 05:09:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:07.764 05:09:59 -- paths/export.sh@5 -- # export PATH 01:15:07.764 05:09:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:07.764 05:09:59 -- nvmf/common.sh@51 -- # : 0 01:15:07.764 05:09:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:15:07.764 05:09:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:15:07.764 05:09:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:15:07.764 05:09:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:15:07.764 05:09:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:15:07.764 05:09:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:15:07.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:15:07.764 05:09:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:15:07.764 05:09:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:15:07.764 
05:09:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 01:15:07.764 05:09:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:15:07.764 05:09:59 -- spdk/autotest.sh@32 -- # uname -s 01:15:07.764 05:09:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:15:07.764 05:09:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:15:07.764 05:09:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:15:07.764 05:09:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:15:07.764 05:09:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:15:07.764 05:09:59 -- spdk/autotest.sh@44 -- # modprobe nbd 01:15:07.764 05:09:59 -- spdk/autotest.sh@46 -- # type -P udevadm 01:15:07.764 05:09:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:15:07.764 05:09:59 -- spdk/autotest.sh@48 -- # udevadm_pid=55445 01:15:07.764 05:09:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:15:07.764 05:09:59 -- pm/common@17 -- # local monitor 01:15:07.764 05:09:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:15:07.764 05:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:15:07.764 05:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:15:07.764 05:09:59 -- pm/common@25 -- # sleep 1 01:15:07.764 05:09:59 -- pm/common@21 -- # date +%s 01:15:07.764 05:09:59 -- pm/common@21 -- # date +%s 01:15:07.764 05:09:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720999 01:15:07.764 05:09:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720999 01:15:07.764 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720999_collect-cpu-load.pm.log 01:15:07.764 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720999_collect-vmstat.pm.log 01:15:08.707 05:10:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:15:08.707 05:10:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:15:08.707 05:10:00 -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:08.707 05:10:00 -- common/autotest_common.sh@10 -- # set +x 01:15:08.707 05:10:00 -- spdk/autotest.sh@59 -- # create_test_list 01:15:08.707 05:10:00 -- common/autotest_common.sh@752 -- # xtrace_disable 01:15:08.707 05:10:00 -- common/autotest_common.sh@10 -- # set +x 01:15:08.707 05:10:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:15:08.707 05:10:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:15:08.707 05:10:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:15:08.707 05:10:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:15:08.707 05:10:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 01:15:08.707 05:10:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:15:08.707 05:10:00 -- common/autotest_common.sh@1457 -- # uname 01:15:08.707 05:10:00 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 01:15:08.707 05:10:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:15:08.707 05:10:00 -- common/autotest_common.sh@1477 -- # uname 01:15:08.707 05:10:00 -- 
common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 01:15:08.707 05:10:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 01:15:08.707 05:10:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 01:15:08.964 lcov: LCOV version 1.15 01:15:08.964 05:10:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:15:23.857 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:15:23.857 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:15:36.157 05:10:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 01:15:36.157 05:10:27 -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:36.157 05:10:27 -- common/autotest_common.sh@10 -- # set +x 01:15:36.415 05:10:27 -- spdk/autotest.sh@78 -- # rm -f 01:15:36.415 05:10:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:15:36.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:36.935 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:15:36.935 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:15:36.935 05:10:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 01:15:36.935 05:10:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:15:36.935 05:10:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:15:36.935 05:10:28 -- common/autotest_common.sh@1658 -- # local nvme bdf 01:15:36.935 05:10:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:15:36.935 05:10:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:15:36.935 05:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:15:36.935 05:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:15:36.935 05:10:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 01:15:36.935 05:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:15:36.935 05:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:15:36.935 05:10:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 01:15:36.935 05:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:15:36.935 05:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:15:36.935 05:10:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:15:36.935 05:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:15:36.935 05:10:28 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:15:36.935 05:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:15:36.935 05:10:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 01:15:36.935 05:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:15:36.935 05:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:15:36.935 05:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 01:15:36.935 05:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:15:36.935 05:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:15:36.935 No valid GPT data, bailing 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # pt= 01:15:36.935 05:10:28 -- scripts/common.sh@395 -- # return 1 01:15:36.935 05:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:15:36.935 1+0 records in 01:15:36.935 1+0 records out 01:15:36.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384085 s, 273 MB/s 01:15:36.935 05:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:15:36.935 05:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:15:36.935 05:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 01:15:36.935 05:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 01:15:36.935 05:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 01:15:36.935 No valid GPT data, bailing 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # pt= 01:15:36.935 05:10:28 -- scripts/common.sh@395 -- # return 1 01:15:36.935 05:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 01:15:36.935 1+0 records in 01:15:36.935 1+0 records out 01:15:36.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0030683 s, 342 MB/s 01:15:36.935 05:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:15:36.935 05:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:15:36.935 05:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 01:15:36.935 05:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 01:15:36.935 05:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 01:15:36.935 No valid GPT data, bailing 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:15:36.935 05:10:28 -- scripts/common.sh@394 -- # pt= 01:15:36.935 05:10:28 -- scripts/common.sh@395 -- # return 1 01:15:36.935 05:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 01:15:36.935 1+0 records in 01:15:36.935 1+0 records out 01:15:36.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458104 s, 229 MB/s 01:15:36.935 05:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:15:36.935 05:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:15:36.935 05:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 01:15:36.935 05:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 01:15:36.935 05:10:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:15:36.935 No valid GPT data, bailing 01:15:37.195 05:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:15:37.195 05:10:28 -- scripts/common.sh@394 -- # pt= 01:15:37.195 05:10:28 -- scripts/common.sh@395 -- # return 1 
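The trace above (and the nvme1n1 wipe that follows) is autotest.sh's pre_cleanup step: each NVMe namespace is probed for an existing partition table with scripts/spdk-gpt.py and blkid, and any namespace that is not in use has its first 1 MiB zeroed so stale GPT or filesystem signatures cannot leak into later tests. A minimal illustrative sketch of that loop, not the exact SPDK code (block_in_use and the spdk-gpt.py probe are simplified into a plain blkid check here):

#!/usr/bin/env bash
# Illustrative sketch only; the real logic lives in spdk/autotest.sh and
# scripts/common.sh (block_in_use), which also consult scripts/spdk-gpt.py.
shopt -s extglob nullglob

for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, skip partitions
    # Treat a device with any recognisable partition table as "in use".
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -n $pt ]]; then
        echo "skipping $dev: partition table '$pt' found"
        continue
    fi
    # Wipe the first 1 MiB so old metadata is gone before the tests start.
    dd if=/dev/zero of="$dev" bs=1M count=1
done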
01:15:37.195 05:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:15:37.195 1+0 records in 01:15:37.195 1+0 records out 01:15:37.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389555 s, 269 MB/s 01:15:37.195 05:10:28 -- spdk/autotest.sh@105 -- # sync 01:15:37.455 05:10:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 01:15:37.455 05:10:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:15:37.455 05:10:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:15:38.860 05:10:30 -- spdk/autotest.sh@111 -- # uname -s 01:15:38.860 05:10:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 01:15:38.860 05:10:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 01:15:38.860 05:10:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:15:39.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:39.425 Hugepages 01:15:39.425 node hugesize free / total 01:15:39.425 node0 1048576kB 0 / 0 01:15:39.425 node0 2048kB 0 / 0 01:15:39.425 01:15:39.425 Type BDF Vendor Device NUMA Driver Device Block devices 01:15:39.425 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:15:39.683 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 01:15:39.683 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 01:15:39.683 05:10:31 -- spdk/autotest.sh@117 -- # uname -s 01:15:39.683 05:10:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 01:15:39.683 05:10:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 01:15:39.683 05:10:31 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:15:40.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:40.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:15:40.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:15:40.250 05:10:31 -- common/autotest_common.sh@1517 -- # sleep 1 01:15:41.637 05:10:32 -- common/autotest_common.sh@1518 -- # bdfs=() 01:15:41.637 05:10:32 -- common/autotest_common.sh@1518 -- # local bdfs 01:15:41.637 05:10:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 01:15:41.637 05:10:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 01:15:41.637 05:10:32 -- common/autotest_common.sh@1498 -- # bdfs=() 01:15:41.637 05:10:32 -- common/autotest_common.sh@1498 -- # local bdfs 01:15:41.637 05:10:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:15:41.637 05:10:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:15:41.637 05:10:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:15:41.637 05:10:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:15:41.637 05:10:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:15:41.637 05:10:32 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:15:41.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:41.637 Waiting for block devices as requested 01:15:41.637 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:15:41.895 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:15:41.895 05:10:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:15:41.895 05:10:33 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:15:41.895 05:10:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 01:15:41.895 05:10:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1531 -- # grep oacs 01:15:41.895 05:10:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:15:41.895 05:10:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:15:41.895 05:10:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:15:41.895 05:10:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:15:41.895 05:10:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:15:41.895 05:10:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:15:41.895 05:10:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:15:41.895 05:10:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:15:41.895 05:10:33 -- common/autotest_common.sh@1543 -- # continue 01:15:41.895 05:10:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:15:41.895 05:10:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:15:41.895 05:10:33 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:15:41.896 05:10:33 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 01:15:41.896 05:10:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:15:41.896 05:10:33 -- common/autotest_common.sh@1531 -- # grep oacs 01:15:41.896 05:10:33 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:15:41.896 05:10:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:15:41.896 05:10:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:15:41.896 05:10:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 01:15:41.896 05:10:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:15:41.896 05:10:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:15:41.896 05:10:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
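The checks traced above come from autotest_common.sh helpers: get_nvme_bdfs asks scripts/gen_nvme.sh for the PCI addresses of the NVMe controllers, get_nvme_ctrlr_from_bdf resolves each address to its /dev/nvmeX node through the sysfs symlinks, and the oacs and unvmcap fields of `nvme id-ctrl` decide whether namespace management needs to be reverted (the comparison that closes the nvme0 check follows below). A rough sketch of the same idea, assuming nvme-cli and jq are installed; the helper names and flow here are illustrative, not the exact SPDK functions:

#!/usr/bin/env bash
# Rough sketch of the BDF -> controller -> id-ctrl checks traced above.
rootdir=/home/vagrant/spdk_repo/spdk

# 1. PCI addresses of NVMe controllers, as reported by gen_nvme.sh.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    # 2. Map the BDF to its character device via the sysfs symlinks.
    ctrlr=""
    for link in /sys/class/nvme/nvme*; do
        if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
            ctrlr=/dev/$(basename "$link")
            break
        fi
    done
    [[ -n $ctrlr ]] || continue

    # 3. Pull OACS (namespace-management capability) and unallocated capacity.
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/oacs/ {print $2}')
    unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/unvmcap/ {print $2}')
    echo "$ctrlr: oacs=$oacs unvmcap=$unvmcap"
done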
01:15:41.896 05:10:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:15:41.896 05:10:33 -- common/autotest_common.sh@1543 -- # continue 01:15:41.896 05:10:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 01:15:41.896 05:10:33 -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:41.896 05:10:33 -- common/autotest_common.sh@10 -- # set +x 01:15:41.896 05:10:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 01:15:41.896 05:10:33 -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:41.896 05:10:33 -- common/autotest_common.sh@10 -- # set +x 01:15:41.896 05:10:33 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:15:42.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:42.461 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:15:42.461 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:15:42.461 05:10:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 01:15:42.461 05:10:34 -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:42.461 05:10:34 -- common/autotest_common.sh@10 -- # set +x 01:15:42.719 05:10:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 01:15:42.719 05:10:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 01:15:42.719 05:10:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 01:15:42.719 05:10:34 -- common/autotest_common.sh@1563 -- # bdfs=() 01:15:42.719 05:10:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 01:15:42.719 05:10:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 01:15:42.719 05:10:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 01:15:42.719 05:10:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 01:15:42.719 05:10:34 -- common/autotest_common.sh@1498 -- # bdfs=() 01:15:42.719 05:10:34 -- common/autotest_common.sh@1498 -- # local bdfs 01:15:42.719 05:10:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:15:42.719 05:10:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:15:42.719 05:10:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:15:42.719 05:10:34 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:15:42.719 05:10:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:15:42.719 05:10:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:15:42.719 05:10:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:15:42.719 05:10:34 -- common/autotest_common.sh@1566 -- # device=0x0010 01:15:42.719 05:10:34 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:15:42.719 05:10:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:15:42.719 05:10:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:15:42.719 05:10:34 -- common/autotest_common.sh@1566 -- # device=0x0010 01:15:42.719 05:10:34 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:15:42.719 05:10:34 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 01:15:42.719 05:10:34 -- common/autotest_common.sh@1572 -- # return 0 01:15:42.719 05:10:34 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 01:15:42.719 05:10:34 -- common/autotest_common.sh@1580 -- # return 0 01:15:42.719 05:10:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 01:15:42.719 05:10:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 01:15:42.719 
05:10:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:15:42.719 05:10:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:15:42.719 05:10:34 -- spdk/autotest.sh@149 -- # timing_enter lib 01:15:42.720 05:10:34 -- common/autotest_common.sh@726 -- # xtrace_disable 01:15:42.720 05:10:34 -- common/autotest_common.sh@10 -- # set +x 01:15:42.720 05:10:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 01:15:42.720 05:10:34 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:15:42.720 05:10:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:42.720 05:10:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:42.720 05:10:34 -- common/autotest_common.sh@10 -- # set +x 01:15:42.720 ************************************ 01:15:42.720 START TEST env 01:15:42.720 ************************************ 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:15:42.720 * Looking for test storage... 01:15:42.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1693 -- # lcov --version 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:42.720 05:10:34 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:42.720 05:10:34 env -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:42.720 05:10:34 env -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:42.720 05:10:34 env -- scripts/common.sh@336 -- # IFS=.-: 01:15:42.720 05:10:34 env -- scripts/common.sh@336 -- # read -ra ver1 01:15:42.720 05:10:34 env -- scripts/common.sh@337 -- # IFS=.-: 01:15:42.720 05:10:34 env -- scripts/common.sh@337 -- # read -ra ver2 01:15:42.720 05:10:34 env -- scripts/common.sh@338 -- # local 'op=<' 01:15:42.720 05:10:34 env -- scripts/common.sh@340 -- # ver1_l=2 01:15:42.720 05:10:34 env -- scripts/common.sh@341 -- # ver2_l=1 01:15:42.720 05:10:34 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:42.720 05:10:34 env -- scripts/common.sh@344 -- # case "$op" in 01:15:42.720 05:10:34 env -- scripts/common.sh@345 -- # : 1 01:15:42.720 05:10:34 env -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:42.720 05:10:34 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:42.720 05:10:34 env -- scripts/common.sh@365 -- # decimal 1 01:15:42.720 05:10:34 env -- scripts/common.sh@353 -- # local d=1 01:15:42.720 05:10:34 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:42.720 05:10:34 env -- scripts/common.sh@355 -- # echo 1 01:15:42.720 05:10:34 env -- scripts/common.sh@365 -- # ver1[v]=1 01:15:42.720 05:10:34 env -- scripts/common.sh@366 -- # decimal 2 01:15:42.720 05:10:34 env -- scripts/common.sh@353 -- # local d=2 01:15:42.720 05:10:34 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:42.720 05:10:34 env -- scripts/common.sh@355 -- # echo 2 01:15:42.720 05:10:34 env -- scripts/common.sh@366 -- # ver2[v]=2 01:15:42.720 05:10:34 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:42.720 05:10:34 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:42.720 05:10:34 env -- scripts/common.sh@368 -- # return 0 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:42.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:42.720 --rc genhtml_branch_coverage=1 01:15:42.720 --rc genhtml_function_coverage=1 01:15:42.720 --rc genhtml_legend=1 01:15:42.720 --rc geninfo_all_blocks=1 01:15:42.720 --rc geninfo_unexecuted_blocks=1 01:15:42.720 01:15:42.720 ' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:42.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:42.720 --rc genhtml_branch_coverage=1 01:15:42.720 --rc genhtml_function_coverage=1 01:15:42.720 --rc genhtml_legend=1 01:15:42.720 --rc geninfo_all_blocks=1 01:15:42.720 --rc geninfo_unexecuted_blocks=1 01:15:42.720 01:15:42.720 ' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:42.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:42.720 --rc genhtml_branch_coverage=1 01:15:42.720 --rc genhtml_function_coverage=1 01:15:42.720 --rc genhtml_legend=1 01:15:42.720 --rc geninfo_all_blocks=1 01:15:42.720 --rc geninfo_unexecuted_blocks=1 01:15:42.720 01:15:42.720 ' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:42.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:42.720 --rc genhtml_branch_coverage=1 01:15:42.720 --rc genhtml_function_coverage=1 01:15:42.720 --rc genhtml_legend=1 01:15:42.720 --rc geninfo_all_blocks=1 01:15:42.720 --rc geninfo_unexecuted_blocks=1 01:15:42.720 01:15:42.720 ' 01:15:42.720 05:10:34 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:42.720 05:10:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:42.720 05:10:34 env -- common/autotest_common.sh@10 -- # set +x 01:15:42.720 ************************************ 01:15:42.720 START TEST env_memory 01:15:42.720 ************************************ 01:15:42.720 05:10:34 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:15:42.720 01:15:42.720 01:15:42.720 CUnit - A unit testing framework for C - Version 2.1-3 01:15:42.720 http://cunit.sourceforge.net/ 01:15:42.720 01:15:42.720 01:15:42.720 Suite: memory 01:15:42.978 Test: alloc and free memory map ...[2024-12-09 05:10:34.350843] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
284:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:15:42.978 passed 01:15:42.978 Test: mem map translation ...[2024-12-09 05:10:34.374183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:15:42.978 [2024-12-09 05:10:34.374210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:15:42.978 [2024-12-09 05:10:34.374251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:15:42.978 [2024-12-09 05:10:34.374256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 606:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:15:42.978 passed 01:15:42.978 Test: mem map registration ...[2024-12-09 05:10:34.424945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 01:15:42.978 [2024-12-09 05:10:34.424966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 01:15:42.978 passed 01:15:42.978 Test: mem map adjacent registrations ...passed 01:15:42.978 01:15:42.978 Run Summary: Type Total Ran Passed Failed Inactive 01:15:42.978 suites 1 1 n/a 0 0 01:15:42.978 tests 4 4 4 0 0 01:15:42.978 asserts 152 152 152 0 n/a 01:15:42.978 01:15:42.978 Elapsed time = 0.169 seconds 01:15:42.978 01:15:42.978 real 0m0.183s 01:15:42.978 user 0m0.168s 01:15:42.978 sys 0m0.012s 01:15:42.978 05:10:34 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:42.978 05:10:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:15:42.978 ************************************ 01:15:42.978 END TEST env_memory 01:15:42.978 ************************************ 01:15:42.978 05:10:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:15:42.978 05:10:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:42.978 05:10:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:42.978 05:10:34 env -- common/autotest_common.sh@10 -- # set +x 01:15:42.978 ************************************ 01:15:42.978 START TEST env_vtophys 01:15:42.978 ************************************ 01:15:42.978 05:10:34 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:15:42.978 EAL: lib.eal log level changed from notice to debug 01:15:42.978 EAL: Detected lcore 0 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 1 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 2 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 3 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 4 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 5 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 6 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 7 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 8 as core 0 on socket 0 01:15:42.978 EAL: Detected lcore 9 as core 0 on socket 0 01:15:42.978 EAL: Maximum logical cores by configuration: 128 01:15:42.978 EAL: Detected CPU lcores: 10 01:15:42.978 EAL: Detected NUMA nodes: 1 01:15:42.978 EAL: Checking presence of .so 'librte_eal.so.24.1' 01:15:42.978 EAL: Detected shared linkage of DPDK 01:15:42.978 EAL: No 
shared files mode enabled, IPC will be disabled 01:15:42.978 EAL: Selected IOVA mode 'PA' 01:15:42.978 EAL: Probing VFIO support... 01:15:42.978 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:15:42.978 EAL: VFIO modules not loaded, skipping VFIO support... 01:15:42.978 EAL: Ask a virtual area of 0x2e000 bytes 01:15:42.978 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:15:42.978 EAL: Setting up physically contiguous memory... 01:15:42.978 EAL: Setting maximum number of open files to 524288 01:15:42.978 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:15:42.978 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:15:42.978 EAL: Ask a virtual area of 0x61000 bytes 01:15:42.978 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:15:42.978 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:15:42.978 EAL: Ask a virtual area of 0x400000000 bytes 01:15:42.978 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:15:42.978 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:15:42.978 EAL: Ask a virtual area of 0x61000 bytes 01:15:42.978 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:15:42.978 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:15:42.978 EAL: Ask a virtual area of 0x400000000 bytes 01:15:42.978 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:15:42.978 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:15:42.978 EAL: Ask a virtual area of 0x61000 bytes 01:15:42.978 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:15:42.978 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:15:42.978 EAL: Ask a virtual area of 0x400000000 bytes 01:15:42.978 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:15:42.978 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:15:42.978 EAL: Ask a virtual area of 0x61000 bytes 01:15:42.978 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:15:42.978 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:15:42.978 EAL: Ask a virtual area of 0x400000000 bytes 01:15:42.978 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:15:42.978 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:15:42.978 EAL: Hugepages will be freed exactly as allocated. 01:15:42.978 EAL: No shared files mode enabled, IPC is disabled 01:15:42.978 EAL: No shared files mode enabled, IPC is disabled 01:15:43.236 EAL: TSC frequency is ~2600000 KHz 01:15:43.236 EAL: Main lcore 0 is ready (tid=7fa0948fca00;cpuset=[0]) 01:15:43.236 EAL: Trying to obtain current memory policy. 01:15:43.236 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 0 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 2MB 01:15:43.237 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:15:43.237 EAL: No PCI address specified using 'addr=' in: bus=pci 01:15:43.237 EAL: Mem event callback 'spdk:(nil)' registered 01:15:43.237 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 01:15:43.237 01:15:43.237 01:15:43.237 CUnit - A unit testing framework for C - Version 2.1-3 01:15:43.237 http://cunit.sourceforge.net/ 01:15:43.237 01:15:43.237 01:15:43.237 Suite: components_suite 01:15:43.237 Test: vtophys_malloc_test ...passed 01:15:43.237 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 4MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 4MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 6MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 6MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 10MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 10MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 18MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 18MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 34MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 34MB 01:15:43.237 EAL: Trying to obtain current memory policy. 
01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 66MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 66MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 130MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 130MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.237 EAL: Restoring previous memory policy: 4 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was expanded by 258MB 01:15:43.237 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.237 EAL: request: mp_malloc_sync 01:15:43.237 EAL: No shared files mode enabled, IPC is disabled 01:15:43.237 EAL: Heap on socket 0 was shrunk by 258MB 01:15:43.237 EAL: Trying to obtain current memory policy. 01:15:43.237 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.495 EAL: Restoring previous memory policy: 4 01:15:43.495 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.495 EAL: request: mp_malloc_sync 01:15:43.495 EAL: No shared files mode enabled, IPC is disabled 01:15:43.495 EAL: Heap on socket 0 was expanded by 514MB 01:15:43.495 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.495 EAL: request: mp_malloc_sync 01:15:43.495 EAL: No shared files mode enabled, IPC is disabled 01:15:43.495 EAL: Heap on socket 0 was shrunk by 514MB 01:15:43.495 EAL: Trying to obtain current memory policy. 
01:15:43.495 EAL: Setting policy MPOL_PREFERRED for socket 0 01:15:43.752 EAL: Restoring previous memory policy: 4 01:15:43.752 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.752 EAL: request: mp_malloc_sync 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 EAL: Heap on socket 0 was expanded by 1026MB 01:15:43.752 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.752 passed 01:15:43.752 01:15:43.752 Run Summary: Type Total Ran Passed Failed Inactive 01:15:43.752 suites 1 1 n/a 0 0 01:15:43.752 tests 2 2 2 0 0 01:15:43.752 asserts 5386 5386 5386 0 n/a 01:15:43.752 01:15:43.752 Elapsed time = 0.618 seconds 01:15:43.752 EAL: request: mp_malloc_sync 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 EAL: Heap on socket 0 was shrunk by 1026MB 01:15:43.752 EAL: Calling mem event callback 'spdk:(nil)' 01:15:43.752 EAL: request: mp_malloc_sync 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 EAL: Heap on socket 0 was shrunk by 2MB 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 EAL: No shared files mode enabled, IPC is disabled 01:15:43.752 01:15:43.752 real 0m0.809s 01:15:43.752 user 0m0.391s 01:15:43.752 sys 0m0.284s 01:15:43.752 05:10:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:43.752 05:10:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:15:43.752 ************************************ 01:15:43.752 END TEST env_vtophys 01:15:43.752 ************************************ 01:15:44.009 05:10:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:15:44.009 05:10:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:44.009 05:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:44.009 05:10:35 env -- common/autotest_common.sh@10 -- # set +x 01:15:44.009 ************************************ 01:15:44.009 START TEST env_pci 01:15:44.009 ************************************ 01:15:44.009 05:10:35 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:15:44.009 01:15:44.009 01:15:44.009 CUnit - A unit testing framework for C - Version 2.1-3 01:15:44.009 http://cunit.sourceforge.net/ 01:15:44.009 01:15:44.009 01:15:44.009 Suite: pci 01:15:44.009 Test: pci_hook ...[2024-12-09 05:10:35.390879] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57633 has claimed it 01:15:44.009 passed 01:15:44.009 01:15:44.009 Run Summary: Type Total Ran Passed Failed Inactive 01:15:44.009 suites 1 1 n/a 0 0 01:15:44.009 tests 1 1 1 0 0 01:15:44.009 asserts 25 25 25 0 n/a 01:15:44.009 01:15:44.009 Elapsed time = 0.001 seconds 01:15:44.009 EAL: Cannot find device (10000:00:01.0) 01:15:44.009 EAL: Failed to attach device on primary process 01:15:44.009 01:15:44.009 real 0m0.014s 01:15:44.009 user 0m0.007s 01:15:44.009 sys 0m0.006s 01:15:44.009 05:10:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:44.009 ************************************ 01:15:44.009 END TEST env_pci 01:15:44.009 ************************************ 01:15:44.009 05:10:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:15:44.009 05:10:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:15:44.009 05:10:35 env -- env/env.sh@15 -- # uname 01:15:44.009 05:10:35 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 01:15:44.009 05:10:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:15:44.009 05:10:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:15:44.009 05:10:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:15:44.010 05:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:44.010 05:10:35 env -- common/autotest_common.sh@10 -- # set +x 01:15:44.010 ************************************ 01:15:44.010 START TEST env_dpdk_post_init 01:15:44.010 ************************************ 01:15:44.010 05:10:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:15:44.010 EAL: Detected CPU lcores: 10 01:15:44.010 EAL: Detected NUMA nodes: 1 01:15:44.010 EAL: Detected shared linkage of DPDK 01:15:44.010 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:15:44.010 EAL: Selected IOVA mode 'PA' 01:15:44.010 TELEMETRY: No legacy callbacks, legacy socket not created 01:15:44.010 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:15:44.010 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:15:44.010 Starting DPDK initialization... 01:15:44.010 Starting SPDK post initialization... 01:15:44.010 SPDK NVMe probe 01:15:44.010 Attaching to 0000:00:10.0 01:15:44.010 Attaching to 0000:00:11.0 01:15:44.010 Attached to 0000:00:10.0 01:15:44.010 Attached to 0000:00:11.0 01:15:44.010 Cleaning up... 01:15:44.010 01:15:44.010 real 0m0.169s 01:15:44.010 user 0m0.046s 01:15:44.010 sys 0m0.022s 01:15:44.010 05:10:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:44.010 05:10:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:15:44.010 ************************************ 01:15:44.010 END TEST env_dpdk_post_init 01:15:44.010 ************************************ 01:15:44.268 05:10:35 env -- env/env.sh@26 -- # uname 01:15:44.268 05:10:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:15:44.268 05:10:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:15:44.268 05:10:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:44.268 05:10:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:44.268 05:10:35 env -- common/autotest_common.sh@10 -- # set +x 01:15:44.268 ************************************ 01:15:44.268 START TEST env_mem_callbacks 01:15:44.268 ************************************ 01:15:44.268 05:10:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:15:44.268 EAL: Detected CPU lcores: 10 01:15:44.268 EAL: Detected NUMA nodes: 1 01:15:44.268 EAL: Detected shared linkage of DPDK 01:15:44.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:15:44.268 EAL: Selected IOVA mode 'PA' 01:15:44.268 01:15:44.268 01:15:44.268 CUnit - A unit testing framework for C - Version 2.1-3 01:15:44.268 http://cunit.sourceforge.net/ 01:15:44.268 01:15:44.268 01:15:44.268 Suite: memory 01:15:44.268 Test: test ... 
01:15:44.268 register 0x200000200000 2097152 01:15:44.268 malloc 3145728 01:15:44.268 TELEMETRY: No legacy callbacks, legacy socket not created 01:15:44.268 register 0x200000400000 4194304 01:15:44.268 buf 0x200000500000 len 3145728 PASSED 01:15:44.268 malloc 64 01:15:44.268 buf 0x2000004fff40 len 64 PASSED 01:15:44.268 malloc 4194304 01:15:44.268 register 0x200000800000 6291456 01:15:44.268 buf 0x200000a00000 len 4194304 PASSED 01:15:44.268 free 0x200000500000 3145728 01:15:44.268 free 0x2000004fff40 64 01:15:44.268 unregister 0x200000400000 4194304 PASSED 01:15:44.268 free 0x200000a00000 4194304 01:15:44.268 unregister 0x200000800000 6291456 PASSED 01:15:44.268 malloc 8388608 01:15:44.268 register 0x200000400000 10485760 01:15:44.268 buf 0x200000600000 len 8388608 PASSED 01:15:44.268 free 0x200000600000 8388608 01:15:44.268 unregister 0x200000400000 10485760 PASSED 01:15:44.268 passed 01:15:44.268 01:15:44.268 Run Summary: Type Total Ran Passed Failed Inactive 01:15:44.268 suites 1 1 n/a 0 0 01:15:44.268 tests 1 1 1 0 0 01:15:44.268 asserts 15 15 15 0 n/a 01:15:44.268 01:15:44.268 Elapsed time = 0.008 seconds 01:15:44.268 01:15:44.268 real 0m0.135s 01:15:44.268 user 0m0.012s 01:15:44.268 sys 0m0.022s 01:15:44.268 05:10:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:44.268 05:10:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:15:44.268 ************************************ 01:15:44.268 END TEST env_mem_callbacks 01:15:44.268 ************************************ 01:15:44.268 01:15:44.268 real 0m1.653s 01:15:44.268 user 0m0.771s 01:15:44.268 sys 0m0.548s 01:15:44.268 05:10:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:44.268 ************************************ 01:15:44.268 END TEST env 01:15:44.268 ************************************ 01:15:44.268 05:10:35 env -- common/autotest_common.sh@10 -- # set +x 01:15:44.268 05:10:35 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:15:44.268 05:10:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:44.268 05:10:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:44.268 05:10:35 -- common/autotest_common.sh@10 -- # set +x 01:15:44.268 ************************************ 01:15:44.268 START TEST rpc 01:15:44.268 ************************************ 01:15:44.268 05:10:35 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:15:44.527 * Looking for test storage... 
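The START TEST / END TEST banners that frame each suite above (env_memory, env_vtophys, env_pci, env_dpdk_post_init, env_mem_callbacks) are printed by the run_test helper in autotest_common.sh, which times the wrapped command and records it for the timing report; the rpc suite just starting below is framed the same way. A simplified stand-in, purely to show the shape of that wrapper rather than its exact implementation:

#!/usr/bin/env bash
# Simplified stand-in for SPDK's run_test; the real helper also records
# timing data and supports nested suite names (e.g. env.env_memory).
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS
    "$@"                         # run the actual test command
    local rc=$?
    echo "************************************"
    echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
    echo "************************************"
    return $rc
}

# Example usage mirroring the log above:
# run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut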
01:15:44.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:44.527 05:10:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:44.527 05:10:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 01:15:44.527 05:10:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 01:15:44.527 05:10:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 01:15:44.527 05:10:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:44.527 05:10:35 rpc -- scripts/common.sh@344 -- # case "$op" in 01:15:44.527 05:10:35 rpc -- scripts/common.sh@345 -- # : 1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:44.527 05:10:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:44.527 05:10:35 rpc -- scripts/common.sh@365 -- # decimal 1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@353 -- # local d=1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:44.527 05:10:35 rpc -- scripts/common.sh@355 -- # echo 1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:15:44.527 05:10:35 rpc -- scripts/common.sh@366 -- # decimal 2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@353 -- # local d=2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:44.527 05:10:35 rpc -- scripts/common.sh@355 -- # echo 2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:15:44.527 05:10:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:44.527 05:10:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:44.527 05:10:35 rpc -- scripts/common.sh@368 -- # return 0 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:44.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:44.527 --rc genhtml_branch_coverage=1 01:15:44.527 --rc genhtml_function_coverage=1 01:15:44.527 --rc genhtml_legend=1 01:15:44.527 --rc geninfo_all_blocks=1 01:15:44.527 --rc geninfo_unexecuted_blocks=1 01:15:44.527 01:15:44.527 ' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:44.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:44.527 --rc genhtml_branch_coverage=1 01:15:44.527 --rc genhtml_function_coverage=1 01:15:44.527 --rc genhtml_legend=1 01:15:44.527 --rc geninfo_all_blocks=1 01:15:44.527 --rc geninfo_unexecuted_blocks=1 01:15:44.527 01:15:44.527 ' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:44.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:44.527 --rc genhtml_branch_coverage=1 01:15:44.527 --rc genhtml_function_coverage=1 01:15:44.527 --rc 
genhtml_legend=1 01:15:44.527 --rc geninfo_all_blocks=1 01:15:44.527 --rc geninfo_unexecuted_blocks=1 01:15:44.527 01:15:44.527 ' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:44.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:44.527 --rc genhtml_branch_coverage=1 01:15:44.527 --rc genhtml_function_coverage=1 01:15:44.527 --rc genhtml_legend=1 01:15:44.527 --rc geninfo_all_blocks=1 01:15:44.527 --rc geninfo_unexecuted_blocks=1 01:15:44.527 01:15:44.527 ' 01:15:44.527 05:10:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57751 01:15:44.527 05:10:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:44.527 05:10:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57751 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 57751 ']' 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:44.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:44.527 05:10:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:44.527 05:10:35 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:44.527 [2024-12-09 05:10:36.025156] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:15:44.527 [2024-12-09 05:10:36.025225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57751 ] 01:15:44.785 [2024-12-09 05:10:36.156558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:44.785 [2024-12-09 05:10:36.191242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:15:44.785 [2024-12-09 05:10:36.191281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57751' to capture a snapshot of events at runtime. 01:15:44.786 [2024-12-09 05:10:36.191287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:15:44.786 [2024-12-09 05:10:36.191292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:15:44.786 [2024-12-09 05:10:36.191297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57751 for offline analysis/debug. 
01:15:44.786 [2024-12-09 05:10:36.191545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:45.353 05:10:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:45.353 05:10:36 rpc -- common/autotest_common.sh@868 -- # return 0 01:15:45.353 05:10:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:15:45.353 05:10:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:15:45.353 05:10:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:15:45.353 05:10:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:15:45.353 05:10:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:45.353 05:10:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:45.353 05:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:45.353 ************************************ 01:15:45.353 START TEST rpc_integrity 01:15:45.353 ************************************ 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.353 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:15:45.353 { 01:15:45.353 "aliases": [ 01:15:45.353 "0e202a6b-842b-4795-9253-8c21081dbce6" 01:15:45.353 ], 01:15:45.353 "assigned_rate_limits": { 01:15:45.353 "r_mbytes_per_sec": 0, 01:15:45.353 "rw_ios_per_sec": 0, 01:15:45.353 "rw_mbytes_per_sec": 0, 01:15:45.353 "w_mbytes_per_sec": 0 01:15:45.353 }, 01:15:45.353 "block_size": 512, 01:15:45.353 "claimed": false, 01:15:45.353 "driver_specific": {}, 01:15:45.353 "memory_domains": [ 01:15:45.353 { 01:15:45.353 "dma_device_id": "system", 01:15:45.353 "dma_device_type": 1 01:15:45.353 }, 01:15:45.353 { 01:15:45.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:45.353 "dma_device_type": 2 01:15:45.353 } 01:15:45.353 ], 01:15:45.353 "name": "Malloc0", 
01:15:45.353 "num_blocks": 16384, 01:15:45.353 "product_name": "Malloc disk", 01:15:45.353 "supported_io_types": { 01:15:45.353 "abort": true, 01:15:45.353 "compare": false, 01:15:45.353 "compare_and_write": false, 01:15:45.353 "copy": true, 01:15:45.353 "flush": true, 01:15:45.353 "get_zone_info": false, 01:15:45.353 "nvme_admin": false, 01:15:45.353 "nvme_io": false, 01:15:45.353 "nvme_io_md": false, 01:15:45.353 "nvme_iov_md": false, 01:15:45.353 "read": true, 01:15:45.353 "reset": true, 01:15:45.353 "seek_data": false, 01:15:45.353 "seek_hole": false, 01:15:45.353 "unmap": true, 01:15:45.353 "write": true, 01:15:45.353 "write_zeroes": true, 01:15:45.353 "zcopy": true, 01:15:45.353 "zone_append": false, 01:15:45.353 "zone_management": false 01:15:45.353 }, 01:15:45.353 "uuid": "0e202a6b-842b-4795-9253-8c21081dbce6", 01:15:45.353 "zoned": false 01:15:45.353 } 01:15:45.353 ]' 01:15:45.353 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:15:45.612 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:15:45.612 05:10:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:15:45.612 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.612 05:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.612 [2024-12-09 05:10:37.000894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:15:45.612 [2024-12-09 05:10:37.000930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:15:45.612 [2024-12-09 05:10:37.000941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cc5c30 01:15:45.612 [2024-12-09 05:10:37.000947] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:15:45.612 [2024-12-09 05:10:37.002247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:15:45.612 [2024-12-09 05:10:37.002275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:15:45.612 Passthru0 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:15:45.612 { 01:15:45.612 "aliases": [ 01:15:45.612 "0e202a6b-842b-4795-9253-8c21081dbce6" 01:15:45.612 ], 01:15:45.612 "assigned_rate_limits": { 01:15:45.612 "r_mbytes_per_sec": 0, 01:15:45.612 "rw_ios_per_sec": 0, 01:15:45.612 "rw_mbytes_per_sec": 0, 01:15:45.612 "w_mbytes_per_sec": 0 01:15:45.612 }, 01:15:45.612 "block_size": 512, 01:15:45.612 "claim_type": "exclusive_write", 01:15:45.612 "claimed": true, 01:15:45.612 "driver_specific": {}, 01:15:45.612 "memory_domains": [ 01:15:45.612 { 01:15:45.612 "dma_device_id": "system", 01:15:45.612 "dma_device_type": 1 01:15:45.612 }, 01:15:45.612 { 01:15:45.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:45.612 "dma_device_type": 2 01:15:45.612 } 01:15:45.612 ], 01:15:45.612 "name": "Malloc0", 01:15:45.612 "num_blocks": 16384, 01:15:45.612 "product_name": "Malloc disk", 01:15:45.612 "supported_io_types": { 01:15:45.612 "abort": true, 01:15:45.612 "compare": false, 01:15:45.612 
"compare_and_write": false, 01:15:45.612 "copy": true, 01:15:45.612 "flush": true, 01:15:45.612 "get_zone_info": false, 01:15:45.612 "nvme_admin": false, 01:15:45.612 "nvme_io": false, 01:15:45.612 "nvme_io_md": false, 01:15:45.612 "nvme_iov_md": false, 01:15:45.612 "read": true, 01:15:45.612 "reset": true, 01:15:45.612 "seek_data": false, 01:15:45.612 "seek_hole": false, 01:15:45.612 "unmap": true, 01:15:45.612 "write": true, 01:15:45.612 "write_zeroes": true, 01:15:45.612 "zcopy": true, 01:15:45.612 "zone_append": false, 01:15:45.612 "zone_management": false 01:15:45.612 }, 01:15:45.612 "uuid": "0e202a6b-842b-4795-9253-8c21081dbce6", 01:15:45.612 "zoned": false 01:15:45.612 }, 01:15:45.612 { 01:15:45.612 "aliases": [ 01:15:45.612 "1286eec7-ccee-5bc4-bc8e-6c5d011f795a" 01:15:45.612 ], 01:15:45.612 "assigned_rate_limits": { 01:15:45.612 "r_mbytes_per_sec": 0, 01:15:45.612 "rw_ios_per_sec": 0, 01:15:45.612 "rw_mbytes_per_sec": 0, 01:15:45.612 "w_mbytes_per_sec": 0 01:15:45.612 }, 01:15:45.612 "block_size": 512, 01:15:45.612 "claimed": false, 01:15:45.612 "driver_specific": { 01:15:45.612 "passthru": { 01:15:45.612 "base_bdev_name": "Malloc0", 01:15:45.612 "name": "Passthru0" 01:15:45.612 } 01:15:45.612 }, 01:15:45.612 "memory_domains": [ 01:15:45.612 { 01:15:45.612 "dma_device_id": "system", 01:15:45.612 "dma_device_type": 1 01:15:45.612 }, 01:15:45.612 { 01:15:45.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:45.612 "dma_device_type": 2 01:15:45.612 } 01:15:45.612 ], 01:15:45.612 "name": "Passthru0", 01:15:45.612 "num_blocks": 16384, 01:15:45.612 "product_name": "passthru", 01:15:45.612 "supported_io_types": { 01:15:45.612 "abort": true, 01:15:45.612 "compare": false, 01:15:45.612 "compare_and_write": false, 01:15:45.612 "copy": true, 01:15:45.612 "flush": true, 01:15:45.612 "get_zone_info": false, 01:15:45.612 "nvme_admin": false, 01:15:45.612 "nvme_io": false, 01:15:45.612 "nvme_io_md": false, 01:15:45.612 "nvme_iov_md": false, 01:15:45.612 "read": true, 01:15:45.612 "reset": true, 01:15:45.612 "seek_data": false, 01:15:45.612 "seek_hole": false, 01:15:45.612 "unmap": true, 01:15:45.612 "write": true, 01:15:45.612 "write_zeroes": true, 01:15:45.612 "zcopy": true, 01:15:45.612 "zone_append": false, 01:15:45.612 "zone_management": false 01:15:45.612 }, 01:15:45.612 "uuid": "1286eec7-ccee-5bc4-bc8e-6c5d011f795a", 01:15:45.612 "zoned": false 01:15:45.612 } 01:15:45.612 ]' 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.612 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 01:15:45.612 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.613 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:15:45.613 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:15:45.613 05:10:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:15:45.613 01:15:45.613 real 0m0.208s 01:15:45.613 user 0m0.122s 01:15:45.613 sys 0m0.027s 01:15:45.613 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:45.613 05:10:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 ************************************ 01:15:45.613 END TEST rpc_integrity 01:15:45.613 ************************************ 01:15:45.613 05:10:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:15:45.613 05:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:45.613 05:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:45.613 05:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 ************************************ 01:15:45.613 START TEST rpc_plugins 01:15:45.613 ************************************ 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:15:45.613 { 01:15:45.613 "aliases": [ 01:15:45.613 "a88c5891-c0b9-41fa-9caa-0f2bac4b624a" 01:15:45.613 ], 01:15:45.613 "assigned_rate_limits": { 01:15:45.613 "r_mbytes_per_sec": 0, 01:15:45.613 "rw_ios_per_sec": 0, 01:15:45.613 "rw_mbytes_per_sec": 0, 01:15:45.613 "w_mbytes_per_sec": 0 01:15:45.613 }, 01:15:45.613 "block_size": 4096, 01:15:45.613 "claimed": false, 01:15:45.613 "driver_specific": {}, 01:15:45.613 "memory_domains": [ 01:15:45.613 { 01:15:45.613 "dma_device_id": "system", 01:15:45.613 "dma_device_type": 1 01:15:45.613 }, 01:15:45.613 { 01:15:45.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:45.613 "dma_device_type": 2 01:15:45.613 } 01:15:45.613 ], 01:15:45.613 "name": "Malloc1", 01:15:45.613 "num_blocks": 256, 01:15:45.613 "product_name": "Malloc disk", 01:15:45.613 "supported_io_types": { 01:15:45.613 "abort": true, 01:15:45.613 "compare": false, 01:15:45.613 "compare_and_write": false, 01:15:45.613 "copy": true, 01:15:45.613 "flush": true, 01:15:45.613 "get_zone_info": false, 01:15:45.613 "nvme_admin": false, 01:15:45.613 "nvme_io": false, 01:15:45.613 "nvme_io_md": false, 01:15:45.613 "nvme_iov_md": false, 01:15:45.613 "read": true, 01:15:45.613 "reset": true, 01:15:45.613 "seek_data": false, 01:15:45.613 "seek_hole": false, 01:15:45.613 "unmap": true, 01:15:45.613 "write": true, 01:15:45.613 "write_zeroes": true, 01:15:45.613 "zcopy": true, 01:15:45.613 "zone_append": false, 
01:15:45.613 "zone_management": false 01:15:45.613 }, 01:15:45.613 "uuid": "a88c5891-c0b9-41fa-9caa-0f2bac4b624a", 01:15:45.613 "zoned": false 01:15:45.613 } 01:15:45.613 ]' 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:45.613 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:15:45.613 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:15:45.872 05:10:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:15:45.872 01:15:45.872 real 0m0.106s 01:15:45.872 user 0m0.061s 01:15:45.872 sys 0m0.017s 01:15:45.872 ************************************ 01:15:45.872 END TEST rpc_plugins 01:15:45.872 ************************************ 01:15:45.872 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:45.872 05:10:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:15:45.872 05:10:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:45.872 ************************************ 01:15:45.872 START TEST rpc_trace_cmd_test 01:15:45.872 ************************************ 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:15:45.872 "bdev": { 01:15:45.872 "mask": "0x8", 01:15:45.872 "tpoint_mask": "0xffffffffffffffff" 01:15:45.872 }, 01:15:45.872 "bdev_nvme": { 01:15:45.872 "mask": "0x4000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "bdev_raid": { 01:15:45.872 "mask": "0x20000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "blob": { 01:15:45.872 "mask": "0x10000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "blobfs": { 01:15:45.872 "mask": "0x80", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "dsa": { 01:15:45.872 "mask": "0x200", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "ftl": { 01:15:45.872 "mask": "0x40", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "iaa": { 01:15:45.872 "mask": "0x1000", 
01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "iscsi_conn": { 01:15:45.872 "mask": "0x2", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "nvme_pcie": { 01:15:45.872 "mask": "0x800", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "nvme_tcp": { 01:15:45.872 "mask": "0x2000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "nvmf_rdma": { 01:15:45.872 "mask": "0x10", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "nvmf_tcp": { 01:15:45.872 "mask": "0x20", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "scheduler": { 01:15:45.872 "mask": "0x40000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "scsi": { 01:15:45.872 "mask": "0x4", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "sock": { 01:15:45.872 "mask": "0x8000", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "thread": { 01:15:45.872 "mask": "0x400", 01:15:45.872 "tpoint_mask": "0x0" 01:15:45.872 }, 01:15:45.872 "tpoint_group_mask": "0x8", 01:15:45.872 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57751" 01:15:45.872 }' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:15:45.872 01:15:45.872 real 0m0.153s 01:15:45.872 user 0m0.132s 01:15:45.872 sys 0m0.014s 01:15:45.872 ************************************ 01:15:45.872 END TEST rpc_trace_cmd_test 01:15:45.872 ************************************ 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:45.872 05:10:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:15:45.872 05:10:37 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 01:15:45.872 05:10:37 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:45.872 05:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:45.872 ************************************ 01:15:45.872 START TEST go_rpc 01:15:45.872 ************************************ 01:15:45.872 05:10:37 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 01:15:45.872 05:10:37 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ea0d129a-e78e-4aec-b0f6-f2917e873f3d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"ea0d129a-e78e-4aec-b0f6-f2917e873f3d","zoned":false}]' 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 01:15:46.132 05:10:37 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 01:15:46.132 01:15:46.132 real 0m0.159s 01:15:46.132 user 0m0.088s 01:15:46.132 sys 0m0.035s 01:15:46.132 ************************************ 01:15:46.132 END TEST go_rpc 01:15:46.132 ************************************ 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:15:46.132 05:10:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:15:46.132 05:10:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:46.132 05:10:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:46.132 05:10:37 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 ************************************ 01:15:46.132 START TEST rpc_daemon_integrity 01:15:46.132 ************************************ 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:15:46.132 
05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:15:46.132 { 01:15:46.132 "aliases": [ 01:15:46.132 "4a386299-e6b7-48be-9cb1-468103f717e4" 01:15:46.132 ], 01:15:46.132 "assigned_rate_limits": { 01:15:46.132 "r_mbytes_per_sec": 0, 01:15:46.132 "rw_ios_per_sec": 0, 01:15:46.132 "rw_mbytes_per_sec": 0, 01:15:46.132 "w_mbytes_per_sec": 0 01:15:46.132 }, 01:15:46.132 "block_size": 512, 01:15:46.132 "claimed": false, 01:15:46.132 "driver_specific": {}, 01:15:46.132 "memory_domains": [ 01:15:46.132 { 01:15:46.132 "dma_device_id": "system", 01:15:46.132 "dma_device_type": 1 01:15:46.132 }, 01:15:46.132 { 01:15:46.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:46.132 "dma_device_type": 2 01:15:46.132 } 01:15:46.132 ], 01:15:46.132 "name": "Malloc3", 01:15:46.132 "num_blocks": 16384, 01:15:46.132 "product_name": "Malloc disk", 01:15:46.132 "supported_io_types": { 01:15:46.132 "abort": true, 01:15:46.132 "compare": false, 01:15:46.132 "compare_and_write": false, 01:15:46.132 "copy": true, 01:15:46.132 "flush": true, 01:15:46.132 "get_zone_info": false, 01:15:46.132 "nvme_admin": false, 01:15:46.132 "nvme_io": false, 01:15:46.132 "nvme_io_md": false, 01:15:46.132 "nvme_iov_md": false, 01:15:46.132 "read": true, 01:15:46.132 "reset": true, 01:15:46.132 "seek_data": false, 01:15:46.132 "seek_hole": false, 01:15:46.132 "unmap": true, 01:15:46.132 "write": true, 01:15:46.132 "write_zeroes": true, 01:15:46.132 "zcopy": true, 01:15:46.132 "zone_append": false, 01:15:46.132 "zone_management": false 01:15:46.132 }, 01:15:46.132 "uuid": "4a386299-e6b7-48be-9cb1-468103f717e4", 01:15:46.132 "zoned": false 01:15:46.132 } 01:15:46.132 ]' 01:15:46.132 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.390 [2024-12-09 05:10:37.781614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 01:15:46.390 [2024-12-09 05:10:37.781649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:15:46.390 [2024-12-09 05:10:37.781661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e50700 01:15:46.390 [2024-12-09 05:10:37.781668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 01:15:46.390 [2024-12-09 05:10:37.782842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:15:46.390 [2024-12-09 05:10:37.782868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:15:46.390 Passthru0 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:15:46.390 { 01:15:46.390 "aliases": [ 01:15:46.390 "4a386299-e6b7-48be-9cb1-468103f717e4" 01:15:46.390 ], 01:15:46.390 "assigned_rate_limits": { 01:15:46.390 "r_mbytes_per_sec": 0, 01:15:46.390 "rw_ios_per_sec": 0, 01:15:46.390 "rw_mbytes_per_sec": 0, 01:15:46.390 "w_mbytes_per_sec": 0 01:15:46.390 }, 01:15:46.390 "block_size": 512, 01:15:46.390 "claim_type": "exclusive_write", 01:15:46.390 "claimed": true, 01:15:46.390 "driver_specific": {}, 01:15:46.390 "memory_domains": [ 01:15:46.390 { 01:15:46.390 "dma_device_id": "system", 01:15:46.390 "dma_device_type": 1 01:15:46.390 }, 01:15:46.390 { 01:15:46.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:46.390 "dma_device_type": 2 01:15:46.390 } 01:15:46.390 ], 01:15:46.390 "name": "Malloc3", 01:15:46.390 "num_blocks": 16384, 01:15:46.390 "product_name": "Malloc disk", 01:15:46.390 "supported_io_types": { 01:15:46.390 "abort": true, 01:15:46.390 "compare": false, 01:15:46.390 "compare_and_write": false, 01:15:46.390 "copy": true, 01:15:46.390 "flush": true, 01:15:46.390 "get_zone_info": false, 01:15:46.390 "nvme_admin": false, 01:15:46.390 "nvme_io": false, 01:15:46.390 "nvme_io_md": false, 01:15:46.390 "nvme_iov_md": false, 01:15:46.390 "read": true, 01:15:46.390 "reset": true, 01:15:46.390 "seek_data": false, 01:15:46.390 "seek_hole": false, 01:15:46.390 "unmap": true, 01:15:46.390 "write": true, 01:15:46.390 "write_zeroes": true, 01:15:46.390 "zcopy": true, 01:15:46.390 "zone_append": false, 01:15:46.390 "zone_management": false 01:15:46.390 }, 01:15:46.390 "uuid": "4a386299-e6b7-48be-9cb1-468103f717e4", 01:15:46.390 "zoned": false 01:15:46.390 }, 01:15:46.390 { 01:15:46.390 "aliases": [ 01:15:46.390 "82d93ea6-b4ec-52c9-aacb-965b7bbbe406" 01:15:46.390 ], 01:15:46.390 "assigned_rate_limits": { 01:15:46.390 "r_mbytes_per_sec": 0, 01:15:46.390 "rw_ios_per_sec": 0, 01:15:46.390 "rw_mbytes_per_sec": 0, 01:15:46.390 "w_mbytes_per_sec": 0 01:15:46.390 }, 01:15:46.390 "block_size": 512, 01:15:46.390 "claimed": false, 01:15:46.390 "driver_specific": { 01:15:46.390 "passthru": { 01:15:46.390 "base_bdev_name": "Malloc3", 01:15:46.390 "name": "Passthru0" 01:15:46.390 } 01:15:46.390 }, 01:15:46.390 "memory_domains": [ 01:15:46.390 { 01:15:46.390 "dma_device_id": "system", 01:15:46.390 "dma_device_type": 1 01:15:46.390 }, 01:15:46.390 { 01:15:46.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:15:46.390 "dma_device_type": 2 01:15:46.390 } 01:15:46.390 ], 01:15:46.390 "name": "Passthru0", 01:15:46.390 "num_blocks": 16384, 01:15:46.390 "product_name": "passthru", 01:15:46.390 "supported_io_types": { 01:15:46.390 "abort": true, 01:15:46.390 "compare": false, 01:15:46.390 "compare_and_write": false, 01:15:46.390 "copy": true, 
01:15:46.390 "flush": true, 01:15:46.390 "get_zone_info": false, 01:15:46.390 "nvme_admin": false, 01:15:46.390 "nvme_io": false, 01:15:46.390 "nvme_io_md": false, 01:15:46.390 "nvme_iov_md": false, 01:15:46.390 "read": true, 01:15:46.390 "reset": true, 01:15:46.390 "seek_data": false, 01:15:46.390 "seek_hole": false, 01:15:46.390 "unmap": true, 01:15:46.390 "write": true, 01:15:46.390 "write_zeroes": true, 01:15:46.390 "zcopy": true, 01:15:46.390 "zone_append": false, 01:15:46.390 "zone_management": false 01:15:46.390 }, 01:15:46.390 "uuid": "82d93ea6-b4ec-52c9-aacb-965b7bbbe406", 01:15:46.390 "zoned": false 01:15:46.390 } 01:15:46.390 ]' 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:15:46.390 01:15:46.390 real 0m0.232s 01:15:46.390 user 0m0.140s 01:15:46.390 sys 0m0.028s 01:15:46.390 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:46.390 ************************************ 01:15:46.391 END TEST rpc_daemon_integrity 01:15:46.391 ************************************ 01:15:46.391 05:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:15:46.391 05:10:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:15:46.391 05:10:37 rpc -- rpc/rpc.sh@84 -- # killprocess 57751 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 57751 ']' 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@958 -- # kill -0 57751 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@959 -- # uname 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57751 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:46.391 killing process with pid 57751 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57751' 01:15:46.391 05:10:37 rpc -- 
common/autotest_common.sh@973 -- # kill 57751 01:15:46.391 05:10:37 rpc -- common/autotest_common.sh@978 -- # wait 57751 01:15:46.647 01:15:46.647 real 0m2.347s 01:15:46.647 user 0m2.995s 01:15:46.647 sys 0m0.514s 01:15:46.647 05:10:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:46.647 05:10:38 rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.647 ************************************ 01:15:46.647 END TEST rpc 01:15:46.647 ************************************ 01:15:46.648 05:10:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:15:46.648 05:10:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:46.648 05:10:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:46.648 05:10:38 -- common/autotest_common.sh@10 -- # set +x 01:15:46.648 ************************************ 01:15:46.648 START TEST skip_rpc 01:15:46.648 ************************************ 01:15:46.648 05:10:38 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:15:46.906 * Looking for test storage... 01:15:46.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@345 -- # : 1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:46.906 05:10:38 skip_rpc -- scripts/common.sh@368 -- # return 0 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:46.906 --rc genhtml_branch_coverage=1 01:15:46.906 --rc genhtml_function_coverage=1 01:15:46.906 --rc genhtml_legend=1 01:15:46.906 --rc geninfo_all_blocks=1 01:15:46.906 --rc geninfo_unexecuted_blocks=1 01:15:46.906 01:15:46.906 ' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:46.906 --rc genhtml_branch_coverage=1 01:15:46.906 --rc genhtml_function_coverage=1 01:15:46.906 --rc genhtml_legend=1 01:15:46.906 --rc geninfo_all_blocks=1 01:15:46.906 --rc geninfo_unexecuted_blocks=1 01:15:46.906 01:15:46.906 ' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:46.906 --rc genhtml_branch_coverage=1 01:15:46.906 --rc genhtml_function_coverage=1 01:15:46.906 --rc genhtml_legend=1 01:15:46.906 --rc geninfo_all_blocks=1 01:15:46.906 --rc geninfo_unexecuted_blocks=1 01:15:46.906 01:15:46.906 ' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:46.906 --rc genhtml_branch_coverage=1 01:15:46.906 --rc genhtml_function_coverage=1 01:15:46.906 --rc genhtml_legend=1 01:15:46.906 --rc geninfo_all_blocks=1 01:15:46.906 --rc geninfo_unexecuted_blocks=1 01:15:46.906 01:15:46.906 ' 01:15:46.906 05:10:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:46.906 05:10:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:46.906 05:10:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:46.906 05:10:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.906 ************************************ 01:15:46.906 START TEST skip_rpc 01:15:46.906 ************************************ 01:15:46.906 05:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 01:15:46.906 05:10:38 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58014 01:15:46.906 05:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:46.906 05:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:15:46.906 05:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:15:46.906 [2024-12-09 05:10:38.407469] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:15:46.907 [2024-12-09 05:10:38.407531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 01:15:47.165 [2024-12-09 05:10:38.538424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:47.166 [2024-12-09 05:10:38.573178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:52.466 2024/12/09 05:10:43 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58014 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58014 ']' 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58014 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58014 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 01:15:52.466 killing process with pid 58014 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58014' 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58014 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58014 01:15:52.466 01:15:52.466 real 0m5.249s 01:15:52.466 user 0m4.988s 01:15:52.466 sys 0m0.160s 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:52.466 05:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:52.466 ************************************ 01:15:52.466 END TEST skip_rpc 01:15:52.466 ************************************ 01:15:52.466 05:10:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:15:52.466 05:10:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:52.466 05:10:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:52.466 05:10:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:52.466 ************************************ 01:15:52.466 START TEST skip_rpc_with_json 01:15:52.466 ************************************ 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58101 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58101 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58101 ']' 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:52.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:52.466 05:10:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:52.466 [2024-12-09 05:10:43.699573] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:15:52.466 [2024-12-09 05:10:43.699634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58101 ] 01:15:52.466 [2024-12-09 05:10:43.833665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:52.466 [2024-12-09 05:10:43.867817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:53.033 [2024-12-09 05:10:44.589832] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:15:53.033 2024/12/09 05:10:44 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 01:15:53.033 request: 01:15:53.033 { 01:15:53.033 "method": "nvmf_get_transports", 01:15:53.033 "params": { 01:15:53.033 "trtype": "tcp" 01:15:53.033 } 01:15:53.033 } 01:15:53.033 Got JSON-RPC error response 01:15:53.033 GoRPCClient: error on JSON-RPC call 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:53.033 [2024-12-09 05:10:44.597911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:53.033 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:53.291 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:53.291 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:53.291 { 01:15:53.291 "subsystems": [ 01:15:53.291 { 01:15:53.291 "subsystem": "fsdev", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "fsdev_set_opts", 01:15:53.291 "params": { 01:15:53.291 "fsdev_io_cache_size": 256, 01:15:53.291 "fsdev_io_pool_size": 65535 01:15:53.291 } 01:15:53.291 } 01:15:53.291 ] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "keyring", 01:15:53.291 "config": [] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "iobuf", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "iobuf_set_options", 01:15:53.291 "params": { 01:15:53.291 "enable_numa": false, 01:15:53.291 "large_bufsize": 135168, 01:15:53.291 "large_pool_count": 1024, 01:15:53.291 "small_bufsize": 8192, 01:15:53.291 "small_pool_count": 8192 01:15:53.291 } 
01:15:53.291 } 01:15:53.291 ] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "sock", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "sock_set_default_impl", 01:15:53.291 "params": { 01:15:53.291 "impl_name": "posix" 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "sock_impl_set_options", 01:15:53.291 "params": { 01:15:53.291 "enable_ktls": false, 01:15:53.291 "enable_placement_id": 0, 01:15:53.291 "enable_quickack": false, 01:15:53.291 "enable_recv_pipe": true, 01:15:53.291 "enable_zerocopy_send_client": false, 01:15:53.291 "enable_zerocopy_send_server": true, 01:15:53.291 "impl_name": "ssl", 01:15:53.291 "recv_buf_size": 4096, 01:15:53.291 "send_buf_size": 4096, 01:15:53.291 "tls_version": 0, 01:15:53.291 "zerocopy_threshold": 0 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "sock_impl_set_options", 01:15:53.291 "params": { 01:15:53.291 "enable_ktls": false, 01:15:53.291 "enable_placement_id": 0, 01:15:53.291 "enable_quickack": false, 01:15:53.291 "enable_recv_pipe": true, 01:15:53.291 "enable_zerocopy_send_client": false, 01:15:53.291 "enable_zerocopy_send_server": true, 01:15:53.291 "impl_name": "posix", 01:15:53.291 "recv_buf_size": 2097152, 01:15:53.291 "send_buf_size": 2097152, 01:15:53.291 "tls_version": 0, 01:15:53.291 "zerocopy_threshold": 0 01:15:53.291 } 01:15:53.291 } 01:15:53.291 ] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "vmd", 01:15:53.291 "config": [] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "accel", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "accel_set_options", 01:15:53.291 "params": { 01:15:53.291 "buf_count": 2048, 01:15:53.291 "large_cache_size": 16, 01:15:53.291 "sequence_count": 2048, 01:15:53.291 "small_cache_size": 128, 01:15:53.291 "task_count": 2048 01:15:53.291 } 01:15:53.291 } 01:15:53.291 ] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "bdev", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "bdev_set_options", 01:15:53.291 "params": { 01:15:53.291 "bdev_auto_examine": true, 01:15:53.291 "bdev_io_cache_size": 256, 01:15:53.291 "bdev_io_pool_size": 65535, 01:15:53.291 "iobuf_large_cache_size": 16, 01:15:53.291 "iobuf_small_cache_size": 128 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "bdev_raid_set_options", 01:15:53.291 "params": { 01:15:53.291 "process_max_bandwidth_mb_sec": 0, 01:15:53.291 "process_window_size_kb": 1024 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "bdev_iscsi_set_options", 01:15:53.291 "params": { 01:15:53.291 "timeout_sec": 30 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "bdev_nvme_set_options", 01:15:53.291 "params": { 01:15:53.291 "action_on_timeout": "none", 01:15:53.291 "allow_accel_sequence": false, 01:15:53.291 "arbitration_burst": 0, 01:15:53.291 "bdev_retry_count": 3, 01:15:53.291 "ctrlr_loss_timeout_sec": 0, 01:15:53.291 "delay_cmd_submit": true, 01:15:53.291 "dhchap_dhgroups": [ 01:15:53.291 "null", 01:15:53.291 "ffdhe2048", 01:15:53.291 "ffdhe3072", 01:15:53.291 "ffdhe4096", 01:15:53.291 "ffdhe6144", 01:15:53.291 "ffdhe8192" 01:15:53.291 ], 01:15:53.291 "dhchap_digests": [ 01:15:53.291 "sha256", 01:15:53.291 "sha384", 01:15:53.291 "sha512" 01:15:53.291 ], 01:15:53.291 "disable_auto_failback": false, 01:15:53.291 "fast_io_fail_timeout_sec": 0, 01:15:53.291 "generate_uuids": false, 01:15:53.291 "high_priority_weight": 0, 01:15:53.291 "io_path_stat": false, 01:15:53.291 "io_queue_requests": 0, 01:15:53.291 
"keep_alive_timeout_ms": 10000, 01:15:53.291 "low_priority_weight": 0, 01:15:53.291 "medium_priority_weight": 0, 01:15:53.291 "nvme_adminq_poll_period_us": 10000, 01:15:53.291 "nvme_error_stat": false, 01:15:53.291 "nvme_ioq_poll_period_us": 0, 01:15:53.291 "rdma_cm_event_timeout_ms": 0, 01:15:53.291 "rdma_max_cq_size": 0, 01:15:53.291 "rdma_srq_size": 0, 01:15:53.291 "reconnect_delay_sec": 0, 01:15:53.291 "timeout_admin_us": 0, 01:15:53.291 "timeout_us": 0, 01:15:53.291 "transport_ack_timeout": 0, 01:15:53.291 "transport_retry_count": 4, 01:15:53.291 "transport_tos": 0 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "bdev_nvme_set_hotplug", 01:15:53.291 "params": { 01:15:53.291 "enable": false, 01:15:53.291 "period_us": 100000 01:15:53.291 } 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "method": "bdev_wait_for_examine" 01:15:53.291 } 01:15:53.291 ] 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "scsi", 01:15:53.291 "config": null 01:15:53.291 }, 01:15:53.291 { 01:15:53.291 "subsystem": "scheduler", 01:15:53.291 "config": [ 01:15:53.291 { 01:15:53.291 "method": "framework_set_scheduler", 01:15:53.292 "params": { 01:15:53.292 "name": "static" 01:15:53.292 } 01:15:53.292 } 01:15:53.292 ] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "vhost_scsi", 01:15:53.292 "config": [] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "vhost_blk", 01:15:53.292 "config": [] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "ublk", 01:15:53.292 "config": [] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "nbd", 01:15:53.292 "config": [] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "nvmf", 01:15:53.292 "config": [ 01:15:53.292 { 01:15:53.292 "method": "nvmf_set_config", 01:15:53.292 "params": { 01:15:53.292 "admin_cmd_passthru": { 01:15:53.292 "identify_ctrlr": false 01:15:53.292 }, 01:15:53.292 "dhchap_dhgroups": [ 01:15:53.292 "null", 01:15:53.292 "ffdhe2048", 01:15:53.292 "ffdhe3072", 01:15:53.292 "ffdhe4096", 01:15:53.292 "ffdhe6144", 01:15:53.292 "ffdhe8192" 01:15:53.292 ], 01:15:53.292 "dhchap_digests": [ 01:15:53.292 "sha256", 01:15:53.292 "sha384", 01:15:53.292 "sha512" 01:15:53.292 ], 01:15:53.292 "discovery_filter": "match_any" 01:15:53.292 } 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "method": "nvmf_set_max_subsystems", 01:15:53.292 "params": { 01:15:53.292 "max_subsystems": 1024 01:15:53.292 } 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "method": "nvmf_set_crdt", 01:15:53.292 "params": { 01:15:53.292 "crdt1": 0, 01:15:53.292 "crdt2": 0, 01:15:53.292 "crdt3": 0 01:15:53.292 } 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "method": "nvmf_create_transport", 01:15:53.292 "params": { 01:15:53.292 "abort_timeout_sec": 1, 01:15:53.292 "ack_timeout": 0, 01:15:53.292 "buf_cache_size": 4294967295, 01:15:53.292 "c2h_success": true, 01:15:53.292 "data_wr_pool_size": 0, 01:15:53.292 "dif_insert_or_strip": false, 01:15:53.292 "in_capsule_data_size": 4096, 01:15:53.292 "io_unit_size": 131072, 01:15:53.292 "max_aq_depth": 128, 01:15:53.292 "max_io_qpairs_per_ctrlr": 127, 01:15:53.292 "max_io_size": 131072, 01:15:53.292 "max_queue_depth": 128, 01:15:53.292 "num_shared_buffers": 511, 01:15:53.292 "sock_priority": 0, 01:15:53.292 "trtype": "TCP", 01:15:53.292 "zcopy": false 01:15:53.292 } 01:15:53.292 } 01:15:53.292 ] 01:15:53.292 }, 01:15:53.292 { 01:15:53.292 "subsystem": "iscsi", 01:15:53.292 "config": [ 01:15:53.292 { 01:15:53.292 "method": "iscsi_set_options", 01:15:53.292 "params": { 01:15:53.292 "allow_duplicated_isid": false, 
01:15:53.292 "chap_group": 0, 01:15:53.292 "data_out_pool_size": 2048, 01:15:53.292 "default_time2retain": 20, 01:15:53.292 "default_time2wait": 2, 01:15:53.292 "disable_chap": false, 01:15:53.292 "error_recovery_level": 0, 01:15:53.292 "first_burst_length": 8192, 01:15:53.292 "immediate_data": true, 01:15:53.292 "immediate_data_pool_size": 16384, 01:15:53.292 "max_connections_per_session": 2, 01:15:53.292 "max_large_datain_per_connection": 64, 01:15:53.292 "max_queue_depth": 64, 01:15:53.292 "max_r2t_per_connection": 4, 01:15:53.292 "max_sessions": 128, 01:15:53.292 "mutual_chap": false, 01:15:53.292 "node_base": "iqn.2016-06.io.spdk", 01:15:53.292 "nop_in_interval": 30, 01:15:53.292 "nop_timeout": 60, 01:15:53.292 "pdu_pool_size": 36864, 01:15:53.292 "require_chap": false 01:15:53.292 } 01:15:53.292 } 01:15:53.292 ] 01:15:53.292 } 01:15:53.292 ] 01:15:53.292 } 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58101 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58101 ']' 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58101 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58101 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:53.292 killing process with pid 58101 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58101' 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58101 01:15:53.292 05:10:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58101 01:15:53.564 05:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58135 01:15:53.564 05:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:15:53.564 05:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58135 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58135 ']' 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58135 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58135 01:15:58.823 killing process with pid 58135 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58135' 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58135 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58135 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:15:58.823 01:15:58.823 real 0m6.609s 01:15:58.823 user 0m6.453s 01:15:58.823 sys 0m0.408s 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:15:58.823 ************************************ 01:15:58.823 END TEST skip_rpc_with_json 01:15:58.823 ************************************ 01:15:58.823 05:10:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:15:58.823 05:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:58.823 05:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:58.823 05:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:58.823 ************************************ 01:15:58.823 START TEST skip_rpc_with_delay 01:15:58.823 ************************************ 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:15:58.823 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:15:58.824 [2024-12-09 05:10:50.346810] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
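This failure is the point of the test: spdk_tgt refuses --wait-for-rpc when --no-rpc-server has disabled the RPC server, and the es=1 check just below treats the resulting non-zero exit status as a pass. A minimal sketch of reproducing it by hand, reusing only the binary path and flags shown in this log:

  # expected to log "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  # and exit non-zero; that exit status is exactly what the test asserts
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"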
01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:58.824 01:15:58.824 real 0m0.062s 01:15:58.824 user 0m0.032s 01:15:58.824 sys 0m0.029s 01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:58.824 05:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:15:58.824 ************************************ 01:15:58.824 END TEST skip_rpc_with_delay 01:15:58.824 ************************************ 01:15:58.824 05:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:15:58.824 05:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:15:58.824 05:10:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:15:58.824 05:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:58.824 05:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:58.824 05:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:58.824 ************************************ 01:15:58.824 START TEST exit_on_failed_rpc_init 01:15:58.824 ************************************ 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58245 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58245 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58245 ']' 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:58.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:58.824 05:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:15:59.082 [2024-12-09 05:10:50.442025] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:15:59.082 [2024-12-09 05:10:50.442086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 01:15:59.082 [2024-12-09 05:10:50.576790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:59.082 [2024-12-09 05:10:50.606392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:16:00.017 [2024-12-09 05:10:51.357709] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:00.017 [2024-12-09 05:10:51.357772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58275 ] 01:16:00.017 [2024-12-09 05:10:51.498166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:00.017 [2024-12-09 05:10:51.532876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:00.017 [2024-12-09 05:10:51.532938] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
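This is the failure exit_on_failed_rpc_init is designed to provoke: the second spdk_tgt instance (core mask 0x2) cannot bind the default RPC socket /var/tmp/spdk.sock while pid 58245 still holds it, and the rpc.c/app.c entries that follow record it shutting itself down. Outside the test, either step below avoids the clash; a hedged sketch using only flags seen elsewhere in this log (-r selects the RPC socket), where /var/tmp/spdk2.sock is an illustrative name, not one from the test:

  # stop the instance that currently owns /var/tmp/spdk.sock ...
  kill -SIGINT 58245
  # ... or point the second instance at its own RPC socket (illustrative path)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock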
01:16:00.017 [2024-12-09 05:10:51.532945] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:16:00.017 [2024-12-09 05:10:51.532951] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:00.017 05:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58245 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58245 ']' 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58245 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:00.018 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58245 01:16:00.276 killing process with pid 58245 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58245' 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58245 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58245 01:16:00.276 01:16:00.276 real 0m1.449s 01:16:00.276 user 0m1.720s 01:16:00.276 sys 0m0.240s 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:00.276 05:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:16:00.276 ************************************ 01:16:00.276 END TEST exit_on_failed_rpc_init 01:16:00.276 ************************************ 01:16:00.276 05:10:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:16:00.276 01:16:00.276 real 0m13.654s 01:16:00.276 user 0m13.326s 01:16:00.276 sys 0m0.994s 01:16:00.276 05:10:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:00.276 05:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:00.276 ************************************ 01:16:00.276 END TEST skip_rpc 01:16:00.276 ************************************ 01:16:00.536 05:10:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:16:00.536 05:10:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:00.536 05:10:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:00.536 05:10:51 -- common/autotest_common.sh@10 -- # set +x 01:16:00.536 
************************************ 01:16:00.536 START TEST rpc_client 01:16:00.536 ************************************ 01:16:00.536 05:10:51 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:16:00.536 * Looking for test storage... 01:16:00.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:16:00.536 05:10:51 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:00.536 05:10:51 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:00.536 05:10:51 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@345 -- # : 1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@353 -- # local d=1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@355 -- # echo 1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@353 -- # local d=2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@355 -- # echo 2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:00.536 05:10:52 rpc_client -- scripts/common.sh@368 -- # return 0 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.536 --rc genhtml_branch_coverage=1 01:16:00.536 --rc genhtml_function_coverage=1 01:16:00.536 --rc genhtml_legend=1 01:16:00.536 --rc geninfo_all_blocks=1 01:16:00.536 --rc geninfo_unexecuted_blocks=1 01:16:00.536 01:16:00.536 ' 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.536 --rc genhtml_branch_coverage=1 01:16:00.536 --rc genhtml_function_coverage=1 01:16:00.536 --rc genhtml_legend=1 01:16:00.536 --rc geninfo_all_blocks=1 01:16:00.536 --rc geninfo_unexecuted_blocks=1 01:16:00.536 01:16:00.536 ' 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.536 --rc genhtml_branch_coverage=1 01:16:00.536 --rc genhtml_function_coverage=1 01:16:00.536 --rc genhtml_legend=1 01:16:00.536 --rc geninfo_all_blocks=1 01:16:00.536 --rc geninfo_unexecuted_blocks=1 01:16:00.536 01:16:00.536 ' 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:00.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.536 --rc genhtml_branch_coverage=1 01:16:00.536 --rc genhtml_function_coverage=1 01:16:00.536 --rc genhtml_legend=1 01:16:00.536 --rc geninfo_all_blocks=1 01:16:00.536 --rc geninfo_unexecuted_blocks=1 01:16:00.536 01:16:00.536 ' 01:16:00.536 05:10:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:16:00.536 OK 01:16:00.536 05:10:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:16:00.536 01:16:00.536 real 0m0.160s 01:16:00.536 user 0m0.096s 01:16:00.536 sys 0m0.070s 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:00.536 05:10:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:16:00.536 ************************************ 01:16:00.536 END TEST rpc_client 01:16:00.536 ************************************ 01:16:00.536 05:10:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:16:00.536 05:10:52 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:00.536 05:10:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:00.536 05:10:52 -- common/autotest_common.sh@10 -- # set +x 01:16:00.536 ************************************ 01:16:00.536 START TEST json_config 01:16:00.536 ************************************ 01:16:00.536 05:10:52 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1693 -- # lcov --version 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:00.796 05:10:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:00.796 05:10:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 01:16:00.796 05:10:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 01:16:00.796 05:10:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 01:16:00.796 05:10:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:00.796 05:10:52 json_config -- scripts/common.sh@344 -- # case "$op" in 01:16:00.796 05:10:52 json_config -- scripts/common.sh@345 -- # : 1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:00.796 05:10:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:00.796 05:10:52 json_config -- scripts/common.sh@365 -- # decimal 1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@353 -- # local d=1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:00.796 05:10:52 json_config -- scripts/common.sh@355 -- # echo 1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 01:16:00.796 05:10:52 json_config -- scripts/common.sh@366 -- # decimal 2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@353 -- # local d=2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:00.796 05:10:52 json_config -- scripts/common.sh@355 -- # echo 2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 01:16:00.796 05:10:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:00.796 05:10:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:00.796 05:10:52 json_config -- scripts/common.sh@368 -- # return 0 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.796 --rc genhtml_branch_coverage=1 01:16:00.796 --rc genhtml_function_coverage=1 01:16:00.796 --rc genhtml_legend=1 01:16:00.796 --rc geninfo_all_blocks=1 01:16:00.796 --rc geninfo_unexecuted_blocks=1 01:16:00.796 01:16:00.796 ' 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.796 --rc genhtml_branch_coverage=1 01:16:00.796 --rc genhtml_function_coverage=1 01:16:00.796 --rc genhtml_legend=1 01:16:00.796 --rc geninfo_all_blocks=1 01:16:00.796 --rc geninfo_unexecuted_blocks=1 01:16:00.796 01:16:00.796 ' 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.796 --rc genhtml_branch_coverage=1 01:16:00.796 --rc genhtml_function_coverage=1 01:16:00.796 --rc genhtml_legend=1 01:16:00.796 --rc geninfo_all_blocks=1 01:16:00.796 --rc geninfo_unexecuted_blocks=1 01:16:00.796 01:16:00.796 ' 01:16:00.796 05:10:52 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:00.796 --rc genhtml_branch_coverage=1 01:16:00.796 --rc genhtml_function_coverage=1 01:16:00.796 --rc genhtml_legend=1 01:16:00.796 --rc geninfo_all_blocks=1 01:16:00.796 --rc geninfo_unexecuted_blocks=1 01:16:00.796 01:16:00.796 ' 01:16:00.796 05:10:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@7 -- # uname -s 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:00.796 05:10:52 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:00.796 05:10:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 01:16:00.796 05:10:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:00.796 05:10:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:00.796 05:10:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:00.796 05:10:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:00.796 05:10:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:00.796 05:10:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:00.796 05:10:52 json_config -- paths/export.sh@5 -- # export PATH 01:16:00.796 05:10:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@51 -- # : 0 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:16:00.796 05:10:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:16:00.797 05:10:52 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:16:00.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:16:00.797 05:10:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 01:16:00.797 INFO: JSON configuration test init 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:00.797 05:10:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 01:16:00.797 05:10:52 json_config -- json_config/common.sh@9 -- # local app=target 01:16:00.797 05:10:52 json_config -- json_config/common.sh@10 -- # shift 
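The app_socket, app_params and configs_path tables declared above are what keep the rest of this test readable: once the target launched in the next lines is up, every tgt_rpc entry is simply rpc.py pointed at the target's Unix socket. A minimal sketch of that expansion, using only paths and RPC methods that appear in this log (the shell variable names are illustrative, and the redirect to spdk_tgt_config.json is an assumption about how the saved config used for the later relaunch is produced):

  sock=/var/tmp/spdk_tgt.sock                      # app_socket[target] above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s "$sock" notify_get_types               # tgt_check_notification_types below
  "$rpc" -s "$sock" save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json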
01:16:00.797 05:10:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:16:00.797 05:10:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:16:00.797 05:10:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:16:00.797 05:10:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:00.797 05:10:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:00.797 05:10:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58403 01:16:00.797 05:10:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:16:00.797 Waiting for target to run... 01:16:00.797 05:10:52 json_config -- json_config/common.sh@25 -- # waitforlisten 58403 /var/tmp/spdk_tgt.sock 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:16:00.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:00.797 05:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:00.797 05:10:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 01:16:00.797 [2024-12-09 05:10:52.309146] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:00.797 [2024-12-09 05:10:52.309228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58403 ] 01:16:01.055 [2024-12-09 05:10:52.594501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:01.056 [2024-12-09 05:10:52.622331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:01.622 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@868 -- # return 0 01:16:01.622 05:10:53 json_config -- json_config/common.sh@26 -- # echo '' 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:01.622 05:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 01:16:01.622 05:10:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 01:16:01.622 05:10:53 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 01:16:02.188 05:10:53 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 01:16:02.188 05:10:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 01:16:02.188 05:10:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:02.188 05:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 01:16:02.189 05:10:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 01:16:02.189 05:10:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@51 -- # local get_types 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@54 -- # sort 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 01:16:02.447 05:10:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:02.447 05:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@62 -- # return 0 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 01:16:02.447 05:10:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:02.447 05:10:53 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 01:16:02.447 05:10:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 01:16:02.447 05:10:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 01:16:02.705 MallocForNvmf0 01:16:02.705 05:10:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 01:16:02.705 05:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 01:16:02.705 MallocForNvmf1 01:16:02.705 05:10:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 01:16:02.705 05:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 01:16:02.963 [2024-12-09 05:10:54.436361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:02.963 05:10:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:16:02.963 05:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:16:03.222 05:10:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:16:03.222 05:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:16:03.481 05:10:54 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:16:03.481 05:10:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:16:03.481 05:10:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:16:03.481 05:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:16:03.738 [2024-12-09 05:10:55.244670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:16:03.738 05:10:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 01:16:03.738 05:10:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:03.738 05:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:03.738 05:10:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 01:16:03.738 05:10:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:03.738 05:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:03.738 05:10:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 01:16:03.738 05:10:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 01:16:03.738 05:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:16:03.995 MallocBdevForConfigChangeCheck 01:16:03.995 05:10:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 01:16:03.995 05:10:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:03.995 05:10:55 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:03.995 05:10:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 01:16:03.995 05:10:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:16:04.561 INFO: shutting down applications... 01:16:04.561 05:10:55 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 01:16:04.561 05:10:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 01:16:04.561 05:10:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 01:16:04.561 05:10:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 01:16:04.561 05:10:55 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 01:16:04.819 Calling clear_iscsi_subsystem 01:16:04.819 Calling clear_nvmf_subsystem 01:16:04.819 Calling clear_nbd_subsystem 01:16:04.819 Calling clear_ublk_subsystem 01:16:04.819 Calling clear_vhost_blk_subsystem 01:16:04.819 Calling clear_vhost_scsi_subsystem 01:16:04.819 Calling clear_bdev_subsystem 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@350 -- # count=100 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 01:16:04.819 05:10:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 01:16:05.077 05:10:56 json_config -- json_config/json_config.sh@352 -- # break 01:16:05.077 05:10:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 01:16:05.077 05:10:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 01:16:05.077 05:10:56 json_config -- json_config/common.sh@31 -- # local app=target 01:16:05.077 05:10:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:16:05.077 05:10:56 json_config -- json_config/common.sh@35 -- # [[ -n 58403 ]] 01:16:05.077 05:10:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58403 01:16:05.077 05:10:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 01:16:05.077 05:10:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:16:05.077 05:10:56 json_config -- json_config/common.sh@41 -- # kill -0 58403 01:16:05.077 05:10:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 01:16:05.642 05:10:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 01:16:05.642 05:10:57 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 01:16:05.642 05:10:57 json_config -- json_config/common.sh@41 -- # kill -0 58403 01:16:05.642 05:10:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 01:16:05.642 SPDK target shutdown done 01:16:05.642 INFO: relaunching applications... 01:16:05.643 05:10:57 json_config -- json_config/common.sh@43 -- # break 01:16:05.643 05:10:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 01:16:05.643 05:10:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:16:05.643 05:10:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 01:16:05.643 05:10:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:05.643 05:10:57 json_config -- json_config/common.sh@9 -- # local app=target 01:16:05.643 05:10:57 json_config -- json_config/common.sh@10 -- # shift 01:16:05.643 05:10:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:16:05.643 05:10:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:16:05.643 05:10:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:16:05.643 05:10:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:05.643 05:10:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:05.643 05:10:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58677 01:16:05.643 05:10:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:16:05.643 05:10:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:05.643 Waiting for target to run... 01:16:05.643 05:10:57 json_config -- json_config/common.sh@25 -- # waitforlisten 58677 /var/tmp/spdk_tgt.sock 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 58677 ']' 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:16:05.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:05.643 05:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:05.643 [2024-12-09 05:10:57.106025] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:16:05.643 [2024-12-09 05:10:57.106090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 01:16:05.922 [2024-12-09 05:10:57.386373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:05.922 [2024-12-09 05:10:57.414873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:06.179 [2024-12-09 05:10:57.750669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:06.179 [2024-12-09 05:10:57.782700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:16:06.436 01:16:06.436 INFO: Checking if target configuration is the same... 01:16:06.436 05:10:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:06.436 05:10:57 json_config -- common/autotest_common.sh@868 -- # return 0 01:16:06.436 05:10:57 json_config -- json_config/common.sh@26 -- # echo '' 01:16:06.436 05:10:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 01:16:06.436 05:10:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 01:16:06.436 05:10:57 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:06.436 05:10:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 01:16:06.436 05:10:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:16:06.436 + '[' 2 -ne 2 ']' 01:16:06.436 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:16:06.436 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:16:06.436 + rootdir=/home/vagrant/spdk_repo/spdk 01:16:06.436 +++ basename /dev/fd/62 01:16:06.436 ++ mktemp /tmp/62.XXX 01:16:06.436 + tmp_file_1=/tmp/62.xLZ 01:16:06.436 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:06.436 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:16:06.436 + tmp_file_2=/tmp/spdk_tgt_config.json.WTP 01:16:06.436 + ret=0 01:16:06.436 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:16:07.001 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:16:07.001 + diff -u /tmp/62.xLZ /tmp/spdk_tgt_config.json.WTP 01:16:07.001 + echo 'INFO: JSON config files are the same' 01:16:07.001 INFO: JSON config files are the same 01:16:07.001 + rm /tmp/62.xLZ /tmp/spdk_tgt_config.json.WTP 01:16:07.001 + exit 0 01:16:07.001 INFO: changing configuration and checking if this can be detected... 01:16:07.001 05:10:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 01:16:07.001 05:10:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
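The '+' lines above are json_diff.sh's own trace: it normalizes both documents with config_filter.py -method sort so key ordering cannot cause spurious differences, then lets diff's exit status decide. A condensed sketch of the same comparison, assuming config_filter.py reads stdin and writes stdout as its argument-less use above suggests (the /tmp file names here are illustrative; the real ones come from mktemp):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
  "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'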
01:16:07.001 05:10:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 01:16:07.001 05:10:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 01:16:07.001 05:10:58 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:07.001 05:10:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 01:16:07.001 05:10:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:16:07.001 + '[' 2 -ne 2 ']' 01:16:07.001 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:16:07.001 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:16:07.001 + rootdir=/home/vagrant/spdk_repo/spdk 01:16:07.001 +++ basename /dev/fd/62 01:16:07.001 ++ mktemp /tmp/62.XXX 01:16:07.001 + tmp_file_1=/tmp/62.JUg 01:16:07.001 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:07.001 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:16:07.001 + tmp_file_2=/tmp/spdk_tgt_config.json.ao3 01:16:07.001 + ret=0 01:16:07.001 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:16:07.568 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:16:07.568 + diff -u /tmp/62.JUg /tmp/spdk_tgt_config.json.ao3 01:16:07.568 + ret=1 01:16:07.568 + echo '=== Start of file: /tmp/62.JUg ===' 01:16:07.568 + cat /tmp/62.JUg 01:16:07.568 + echo '=== End of file: /tmp/62.JUg ===' 01:16:07.568 + echo '' 01:16:07.568 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ao3 ===' 01:16:07.568 + cat /tmp/spdk_tgt_config.json.ao3 01:16:07.568 + echo '=== End of file: /tmp/spdk_tgt_config.json.ao3 ===' 01:16:07.568 + echo '' 01:16:07.568 + rm /tmp/62.JUg /tmp/spdk_tgt_config.json.ao3 01:16:07.568 + exit 1 01:16:07.568 INFO: configuration change detected. 01:16:07.568 05:10:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
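[Editor's note] The negative half of the test then mutates the running target and reruns the same comparison, this time expecting the diff to fail. A short sketch of that step under the same assumptions as the previous sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # delete a bdev that exists only in the live config, so it diverges from the on-disk JSON
    "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # rerunning the save/sort/diff sequence from the sketch above should now exit non-zero,
    # which the test reports as 'INFO: configuration change detected.'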
01:16:07.568 05:10:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 01:16:07.568 05:10:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 01:16:07.568 05:10:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:07.568 05:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:07.568 05:10:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 01:16:07.568 05:10:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 01:16:07.568 05:10:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 58677 ]] 01:16:07.568 05:10:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 01:16:07.568 05:10:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 01:16:07.568 05:10:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:07.568 05:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@200 -- # uname -s 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:07.569 05:10:59 json_config -- json_config/json_config.sh@330 -- # killprocess 58677 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@954 -- # '[' -z 58677 ']' 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@958 -- # kill -0 58677 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@959 -- # uname 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58677 01:16:07.569 killing process with pid 58677 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58677' 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@973 -- # kill 58677 01:16:07.569 05:10:59 json_config -- common/autotest_common.sh@978 -- # wait 58677 01:16:07.826 05:10:59 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:16:07.826 05:10:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 01:16:07.826 05:10:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:07.826 05:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:07.826 05:10:59 json_config -- json_config/json_config.sh@335 -- # return 0 01:16:07.826 INFO: Success 01:16:07.826 05:10:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 01:16:07.826 01:16:07.826 real 0m7.140s 01:16:07.826 user 0m9.897s 01:16:07.826 sys 0m1.296s 01:16:07.826 
05:10:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:07.826 ************************************ 01:16:07.826 05:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 01:16:07.826 END TEST json_config 01:16:07.826 ************************************ 01:16:07.826 05:10:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:16:07.826 05:10:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:07.826 05:10:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:07.826 05:10:59 -- common/autotest_common.sh@10 -- # set +x 01:16:07.826 ************************************ 01:16:07.826 START TEST json_config_extra_key 01:16:07.826 ************************************ 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:07.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:07.826 --rc genhtml_branch_coverage=1 01:16:07.826 --rc genhtml_function_coverage=1 01:16:07.826 --rc genhtml_legend=1 01:16:07.826 --rc geninfo_all_blocks=1 01:16:07.826 --rc geninfo_unexecuted_blocks=1 01:16:07.826 01:16:07.826 ' 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:07.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:07.826 --rc genhtml_branch_coverage=1 01:16:07.826 --rc genhtml_function_coverage=1 01:16:07.826 --rc genhtml_legend=1 01:16:07.826 --rc geninfo_all_blocks=1 01:16:07.826 --rc geninfo_unexecuted_blocks=1 01:16:07.826 01:16:07.826 ' 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:07.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:07.826 --rc genhtml_branch_coverage=1 01:16:07.826 --rc genhtml_function_coverage=1 01:16:07.826 --rc genhtml_legend=1 01:16:07.826 --rc geninfo_all_blocks=1 01:16:07.826 --rc geninfo_unexecuted_blocks=1 01:16:07.826 01:16:07.826 ' 01:16:07.826 05:10:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:07.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:07.826 --rc genhtml_branch_coverage=1 01:16:07.826 --rc genhtml_function_coverage=1 01:16:07.826 --rc genhtml_legend=1 01:16:07.826 --rc geninfo_all_blocks=1 01:16:07.826 --rc geninfo_unexecuted_blocks=1 01:16:07.826 01:16:07.826 ' 01:16:07.826 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:07.826 05:10:59 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:07.826 05:10:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 01:16:07.826 05:10:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:07.827 05:10:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:07.827 05:10:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:07.827 05:10:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:07.827 05:10:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:07.827 05:10:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:07.827 05:10:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:16:07.827 05:10:59 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:16:07.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:16:07.827 05:10:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 01:16:07.827 INFO: launching applications... 
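[Editor's note] Earlier in this block, sourcing test/nvmf/common.sh sets NVME_HOSTNQN from `nvme gen-hostnqn` and reuses its trailing uuid as NVME_HOSTID. A two-line illustration of that derivation; it requires nvme-cli, and the parameter expansion is an assumption about how the uuid is extracted (the values in the log are consistent with it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:717d4fb6-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"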
01:16:07.827 05:10:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58850 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:16:07.827 Waiting for target to run... 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58850 /var/tmp/spdk_tgt.sock 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 01:16:07.827 05:10:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:16:07.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:07.827 05:10:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:16:08.083 [2024-12-09 05:10:59.472697] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:08.083 [2024-12-09 05:10:59.472922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 01:16:08.340 [2024-12-09 05:10:59.810683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:08.340 [2024-12-09 05:10:59.834945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:08.906 05:11:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:08.906 05:11:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:16:08.907 01:16:08.907 INFO: shutting down applications... 01:16:08.907 05:11:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
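[Editor's note] The json_config_extra_key setup above declares per-app associative arrays (app_pid, app_socket, app_params, configs_path) keyed by the app name 'target', so the shared helpers in json_config/common.sh can look everything up by name. A self-contained sketch of that bookkeeping pattern; the backgrounded sleep stands in for spdk_tgt:

    declare -A app_pid app_socket app_params
    app_socket[target]='/var/tmp/spdk_tgt.sock'
    app_params[target]='-m 0x1 -s 1024'

    sleep 30 &                              # stand-in for the backgrounded spdk_tgt
    app_pid[target]=$!

    echo "target pid=${app_pid[target]} socket=${app_socket[target]} params=${app_params[target]}"
    kill "${app_pid[target]}"               # cleanup for the sketch only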
01:16:08.907 05:11:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58850 ]] 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58850 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 01:16:08.907 05:11:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@43 -- # break 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:16:09.473 05:11:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:16:09.473 SPDK target shutdown done 01:16:09.473 05:11:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:16:09.473 Success 01:16:09.473 01:16:09.473 real 0m1.563s 01:16:09.473 user 0m1.271s 01:16:09.473 sys 0m0.337s 01:16:09.473 05:11:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:09.473 05:11:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:16:09.473 ************************************ 01:16:09.473 END TEST json_config_extra_key 01:16:09.473 ************************************ 01:16:09.473 05:11:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:16:09.473 05:11:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:09.473 05:11:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:09.473 05:11:00 -- common/autotest_common.sh@10 -- # set +x 01:16:09.473 ************************************ 01:16:09.473 START TEST alias_rpc 01:16:09.474 ************************************ 01:16:09.474 05:11:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:16:09.474 * Looking for test storage... 
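[Editor's note] The shutdown sequence traced above (json_config_test_shutdown_app) sends SIGINT and then polls the pid up to 30 times before declaring "SPDK target shutdown done". A minimal sketch of that loop, matching the 0.5 s sleep visible in the trace:

    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || break   # process gone -> shutdown finished
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }
    # usage: shutdown_app_sketch 58850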
01:16:09.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:16:09.474 05:11:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:09.474 05:11:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:09.474 05:11:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@345 -- # : 1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 01:16:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:09.474 05:11:01 alias_rpc -- scripts/common.sh@368 -- # return 0 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:09.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:09.474 --rc genhtml_branch_coverage=1 01:16:09.474 --rc genhtml_function_coverage=1 01:16:09.474 --rc genhtml_legend=1 01:16:09.474 --rc geninfo_all_blocks=1 01:16:09.474 --rc geninfo_unexecuted_blocks=1 01:16:09.474 01:16:09.474 ' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:09.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:09.474 --rc genhtml_branch_coverage=1 01:16:09.474 --rc genhtml_function_coverage=1 01:16:09.474 --rc genhtml_legend=1 01:16:09.474 --rc geninfo_all_blocks=1 01:16:09.474 --rc geninfo_unexecuted_blocks=1 01:16:09.474 01:16:09.474 ' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:09.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:09.474 --rc genhtml_branch_coverage=1 01:16:09.474 --rc genhtml_function_coverage=1 01:16:09.474 --rc genhtml_legend=1 01:16:09.474 --rc geninfo_all_blocks=1 01:16:09.474 --rc geninfo_unexecuted_blocks=1 01:16:09.474 01:16:09.474 ' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:09.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:09.474 --rc genhtml_branch_coverage=1 01:16:09.474 --rc genhtml_function_coverage=1 01:16:09.474 --rc genhtml_legend=1 01:16:09.474 --rc geninfo_all_blocks=1 01:16:09.474 --rc geninfo_unexecuted_blocks=1 01:16:09.474 01:16:09.474 ' 01:16:09.474 05:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:16:09.474 05:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58940 01:16:09.474 05:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58940 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58940 ']' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:09.474 05:11:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:09.474 05:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:09.474 [2024-12-09 05:11:01.068679] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
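[Editor's note] The `lt 1.15 2` trace above (repeated in the json_config_extra_key and dpdk_mem_utility sections) compares versions by splitting both strings on '.', '-' and ':' and comparing the fields numerically, treating missing fields as zero. A stand-alone sketch of that logic; ver_lt is an illustrative name, not scripts/common.sh itself:

    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                              # equal is not less-than
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 is older than 2'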
01:16:09.474 [2024-12-09 05:11:01.068848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58940 ] 01:16:09.733 [2024-12-09 05:11:01.209824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:09.733 [2024-12-09 05:11:01.245059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:10.666 05:11:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:10.666 05:11:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:10.666 05:11:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:16:10.666 05:11:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58940 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58940 ']' 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58940 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58940 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:10.666 killing process with pid 58940 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58940' 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 58940 01:16:10.666 05:11:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 58940 01:16:10.924 ************************************ 01:16:10.924 END TEST alias_rpc 01:16:10.924 ************************************ 01:16:10.924 01:16:10.924 real 0m1.554s 01:16:10.924 user 0m1.782s 01:16:10.924 sys 0m0.294s 01:16:10.924 05:11:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:10.924 05:11:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:10.924 05:11:02 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 01:16:10.924 05:11:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:16:10.924 05:11:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:10.924 05:11:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:10.924 05:11:02 -- common/autotest_common.sh@10 -- # set +x 01:16:10.924 ************************************ 01:16:10.924 START TEST dpdk_mem_utility 01:16:10.924 ************************************ 01:16:10.924 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:16:10.924 * Looking for test storage... 
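[Editor's note] killprocess, traced above and at the end of the json_config section, verifies the pid is alive with `kill -0`, looks up the command name with `ps --no-headers -o comm=`, checks whether that name is sudo (only the non-sudo branch appears in the trace), then kills and reaps the process. A rough sketch of the non-sudo path; as written it only reaps children of the current shell, which is how the tests use it:

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1            # nothing to do if not running
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1                   # real helper treats sudo specially; not shown in the log
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }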
01:16:11.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:11.183 05:11:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:11.183 --rc genhtml_branch_coverage=1 01:16:11.183 --rc genhtml_function_coverage=1 01:16:11.183 --rc genhtml_legend=1 01:16:11.183 --rc geninfo_all_blocks=1 01:16:11.183 --rc geninfo_unexecuted_blocks=1 01:16:11.183 01:16:11.183 ' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:11.183 --rc 
genhtml_branch_coverage=1 01:16:11.183 --rc genhtml_function_coverage=1 01:16:11.183 --rc genhtml_legend=1 01:16:11.183 --rc geninfo_all_blocks=1 01:16:11.183 --rc geninfo_unexecuted_blocks=1 01:16:11.183 01:16:11.183 ' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:11.183 --rc genhtml_branch_coverage=1 01:16:11.183 --rc genhtml_function_coverage=1 01:16:11.183 --rc genhtml_legend=1 01:16:11.183 --rc geninfo_all_blocks=1 01:16:11.183 --rc geninfo_unexecuted_blocks=1 01:16:11.183 01:16:11.183 ' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:11.183 --rc genhtml_branch_coverage=1 01:16:11.183 --rc genhtml_function_coverage=1 01:16:11.183 --rc genhtml_legend=1 01:16:11.183 --rc geninfo_all_blocks=1 01:16:11.183 --rc geninfo_unexecuted_blocks=1 01:16:11.183 01:16:11.183 ' 01:16:11.183 05:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:16:11.183 05:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59035 01:16:11.183 05:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59035 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59035 ']' 01:16:11.183 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:11.184 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:11.184 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:11.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:11.184 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:11.184 05:11:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:16:11.184 05:11:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:16:11.184 [2024-12-09 05:11:02.665127] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
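[Editor's note] The records that follow show the dpdk_mem_utility test asking the freshly started target for a DPDK memory dump (env_dpdk_get_mem_stats, which replies with the dump file name) and then summarizing it with scripts/dpdk_mem_info.py, first as an overall heap/mempool summary and then per-memzone with -m 0. A sketch of driving that sequence by hand, with paths taken from the log; rpc_cmd in the trace presumably wraps rpc.py, so calling rpc.py directly here is an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    meminfo=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$rpc" env_dpdk_get_mem_stats     # reply in the log: { "filename": "/tmp/spdk_mem_dump.txt" }
    "$meminfo"                        # heap/mempool/memzone totals, as printed below
    "$meminfo" -m 0                   # element-level view of heap id 0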
01:16:11.184 [2024-12-09 05:11:02.665194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 01:16:11.184 [2024-12-09 05:11:02.797838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:11.442 [2024-12-09 05:11:02.834182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:12.007 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:12.007 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 01:16:12.007 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:16:12.008 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:16:12.008 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:12.008 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:16:12.008 { 01:16:12.008 "filename": "/tmp/spdk_mem_dump.txt" 01:16:12.008 } 01:16:12.008 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:12.008 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:16:12.008 DPDK memory size 818.000000 MiB in 1 heap(s) 01:16:12.008 1 heaps totaling size 818.000000 MiB 01:16:12.008 size: 818.000000 MiB heap id: 0 01:16:12.008 end heaps---------- 01:16:12.008 9 mempools totaling size 603.782043 MiB 01:16:12.008 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:16:12.008 size: 158.602051 MiB name: PDU_data_out_Pool 01:16:12.008 size: 100.555481 MiB name: bdev_io_59035 01:16:12.008 size: 50.003479 MiB name: msgpool_59035 01:16:12.008 size: 36.509338 MiB name: fsdev_io_59035 01:16:12.008 size: 21.763794 MiB name: PDU_Pool 01:16:12.008 size: 19.513306 MiB name: SCSI_TASK_Pool 01:16:12.008 size: 4.133484 MiB name: evtpool_59035 01:16:12.008 size: 0.026123 MiB name: Session_Pool 01:16:12.008 end mempools------- 01:16:12.008 6 memzones totaling size 4.142822 MiB 01:16:12.008 size: 1.000366 MiB name: RG_ring_0_59035 01:16:12.008 size: 1.000366 MiB name: RG_ring_1_59035 01:16:12.008 size: 1.000366 MiB name: RG_ring_4_59035 01:16:12.008 size: 1.000366 MiB name: RG_ring_5_59035 01:16:12.008 size: 0.125366 MiB name: RG_ring_2_59035 01:16:12.008 size: 0.015991 MiB name: RG_ring_3_59035 01:16:12.008 end memzones------- 01:16:12.008 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:16:12.267 heap id: 0 total size: 818.000000 MiB number of busy elements: 224 number of free elements: 15 01:16:12.267 list of free elements. 
size: 10.819519 MiB 01:16:12.267 element at address: 0x200019200000 with size: 0.999878 MiB 01:16:12.267 element at address: 0x200019400000 with size: 0.999878 MiB 01:16:12.267 element at address: 0x200000400000 with size: 0.996155 MiB 01:16:12.267 element at address: 0x200032000000 with size: 0.994446 MiB 01:16:12.267 element at address: 0x200006400000 with size: 0.959839 MiB 01:16:12.267 element at address: 0x200012c00000 with size: 0.944275 MiB 01:16:12.267 element at address: 0x200019600000 with size: 0.936584 MiB 01:16:12.267 element at address: 0x200000200000 with size: 0.717346 MiB 01:16:12.267 element at address: 0x20001ae00000 with size: 0.573364 MiB 01:16:12.267 element at address: 0x200000c00000 with size: 0.490662 MiB 01:16:12.267 element at address: 0x20000a600000 with size: 0.489807 MiB 01:16:12.267 element at address: 0x200019800000 with size: 0.485657 MiB 01:16:12.267 element at address: 0x200003e00000 with size: 0.480286 MiB 01:16:12.267 element at address: 0x200028200000 with size: 0.397949 MiB 01:16:12.267 element at address: 0x200000800000 with size: 0.353394 MiB 01:16:12.267 list of standard malloc elements. size: 199.251587 MiB 01:16:12.267 element at address: 0x20000a7fff80 with size: 132.000122 MiB 01:16:12.267 element at address: 0x2000065fff80 with size: 64.000122 MiB 01:16:12.267 element at address: 0x2000192fff80 with size: 1.000122 MiB 01:16:12.267 element at address: 0x2000194fff80 with size: 1.000122 MiB 01:16:12.267 element at address: 0x2000196fff80 with size: 1.000122 MiB 01:16:12.267 element at address: 0x2000003d9f00 with size: 0.140747 MiB 01:16:12.267 element at address: 0x2000196eff00 with size: 0.062622 MiB 01:16:12.267 element at address: 0x2000003fdf80 with size: 0.007935 MiB 01:16:12.267 element at address: 0x2000196efdc0 with size: 0.000305 MiB 01:16:12.267 element at address: 0x2000002d7c40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000003d9e40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff040 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff100 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff280 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff340 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff400 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff580 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff640 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff700 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff880 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ff940 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ffa00 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ffac0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ffd80 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000004ffe40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000085a780 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000085a980 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000085ec40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087ef00 with size: 0.000183 MiB 
01:16:12.267 element at address: 0x20000087efc0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f080 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f140 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f200 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f2c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f380 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f440 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f500 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f5c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x20000087f680 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000008ff940 with size: 0.000183 MiB 01:16:12.267 element at address: 0x2000008ffb40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7da80 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7db40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7dc00 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7dd80 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7de40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7df00 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e080 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e140 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e200 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e380 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e440 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e500 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e680 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e740 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e800 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7e980 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7ea40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7eb00 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7ec80 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000c7ed40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000cff000 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200000cff0c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7af40 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b000 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b180 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b240 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b300 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 01:16:12.267 element at address: 0x200003e7b480 with size: 0.000183 MiB 01:16:12.268 element at 
address: 0x200003e7b540 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200003e7b600 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200003efb980 with size: 0.000183 MiB 01:16:12.268 element at address: 0x2000064fdd80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67d640 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67d700 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67d880 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67d940 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67da00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a67dac0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x2000196efc40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x2000196efd00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x2000198bc740 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae92c80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae92d40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae92e00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae92f80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93040 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93100 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae931c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93280 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93340 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93400 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae934c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93580 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93640 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93700 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae937c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93880 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93940 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93a00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93b80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93c40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93d00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93e80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae93f40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94000 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae940c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94180 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94240 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94300 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae943c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94480 
with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94540 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94600 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae946c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94780 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94840 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94900 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae949c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94a80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94b40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94c00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94d80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94e40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94f00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae95080 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae95140 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae95200 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae952c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae95380 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20001ae95440 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200028265e00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x200028265ec0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826cac0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ccc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826cd80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ce40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826cf00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826cfc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d080 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d140 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d200 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d2c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d380 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d440 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d500 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d5c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d680 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d740 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d800 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d8c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826d980 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826da40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826db00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826dbc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826dc80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826dd40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826de00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826dec0 with size: 0.000183 MiB 
01:16:12.268 element at address: 0x20002826df80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e040 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e100 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e1c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e280 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e340 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e400 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e4c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e580 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e640 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e700 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e7c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e880 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826e940 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ea00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826eac0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826eb80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ec40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ed00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826edc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ee80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ef40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f000 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f0c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f180 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f240 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f300 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f3c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f480 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f540 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f600 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f6c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f780 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f840 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f900 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826f9c0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fa80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fb40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fc00 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fcc0 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fd80 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826fe40 with size: 0.000183 MiB 01:16:12.268 element at address: 0x20002826ff00 with size: 0.000183 MiB 01:16:12.268 list of memzone associated elements. 
size: 607.928894 MiB 01:16:12.268 element at address: 0x20001ae95500 with size: 211.416748 MiB 01:16:12.268 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:16:12.268 element at address: 0x20002826ffc0 with size: 157.562561 MiB 01:16:12.268 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:16:12.268 element at address: 0x200012df1e80 with size: 100.055054 MiB 01:16:12.268 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59035_0 01:16:12.268 element at address: 0x200000dff380 with size: 48.003052 MiB 01:16:12.268 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59035_0 01:16:12.268 element at address: 0x200003ffdb80 with size: 36.008911 MiB 01:16:12.268 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59035_0 01:16:12.268 element at address: 0x2000199be940 with size: 20.255554 MiB 01:16:12.269 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:16:12.269 element at address: 0x2000321feb40 with size: 18.005066 MiB 01:16:12.269 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:16:12.269 element at address: 0x2000004fff00 with size: 3.000244 MiB 01:16:12.269 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59035_0 01:16:12.269 element at address: 0x2000009ffe00 with size: 2.000488 MiB 01:16:12.269 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59035 01:16:12.269 element at address: 0x2000002d7d00 with size: 1.008118 MiB 01:16:12.269 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59035 01:16:12.269 element at address: 0x20000a6fde40 with size: 1.008118 MiB 01:16:12.269 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:16:12.269 element at address: 0x2000198bc800 with size: 1.008118 MiB 01:16:12.269 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:16:12.269 element at address: 0x2000064fde40 with size: 1.008118 MiB 01:16:12.269 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:16:12.269 element at address: 0x200003efba40 with size: 1.008118 MiB 01:16:12.269 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:16:12.269 element at address: 0x200000cff180 with size: 1.000488 MiB 01:16:12.269 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59035 01:16:12.269 element at address: 0x2000008ffc00 with size: 1.000488 MiB 01:16:12.269 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59035 01:16:12.269 element at address: 0x200012cf1c80 with size: 1.000488 MiB 01:16:12.269 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59035 01:16:12.269 element at address: 0x2000320fe940 with size: 1.000488 MiB 01:16:12.269 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59035 01:16:12.269 element at address: 0x20000087f740 with size: 0.500488 MiB 01:16:12.269 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59035 01:16:12.269 element at address: 0x200000c7ee00 with size: 0.500488 MiB 01:16:12.269 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59035 01:16:12.269 element at address: 0x20000a67db80 with size: 0.500488 MiB 01:16:12.269 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:16:12.269 element at address: 0x200003e7b780 with size: 0.500488 MiB 01:16:12.269 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:16:12.269 element at address: 0x20001987c540 with size: 0.250488 MiB 01:16:12.269 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 01:16:12.269 element at address: 0x2000002b7a40 with size: 0.125488 MiB 01:16:12.269 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59035 01:16:12.269 element at address: 0x20000085ed00 with size: 0.125488 MiB 01:16:12.269 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59035 01:16:12.269 element at address: 0x2000064f5b80 with size: 0.031738 MiB 01:16:12.269 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:16:12.269 element at address: 0x200028265f80 with size: 0.023743 MiB 01:16:12.269 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:16:12.269 element at address: 0x20000085aa40 with size: 0.016113 MiB 01:16:12.269 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59035 01:16:12.269 element at address: 0x20002826c0c0 with size: 0.002441 MiB 01:16:12.269 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:16:12.269 element at address: 0x2000004ffb80 with size: 0.000305 MiB 01:16:12.269 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59035 01:16:12.269 element at address: 0x2000008ffa00 with size: 0.000305 MiB 01:16:12.269 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59035 01:16:12.269 element at address: 0x20000085a840 with size: 0.000305 MiB 01:16:12.269 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59035 01:16:12.269 element at address: 0x20002826cb80 with size: 0.000305 MiB 01:16:12.269 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:16:12.269 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:16:12.269 05:11:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59035 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59035 ']' 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59035 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59035 01:16:12.269 killing process with pid 59035 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59035' 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59035 01:16:12.269 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59035 01:16:12.528 01:16:12.528 real 0m1.453s 01:16:12.528 user 0m1.590s 01:16:12.528 sys 0m0.296s 01:16:12.528 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:12.528 ************************************ 01:16:12.528 END TEST dpdk_mem_utility 01:16:12.528 ************************************ 01:16:12.528 05:11:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:16:12.528 05:11:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:16:12.528 05:11:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:12.528 05:11:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:12.528 05:11:03 -- common/autotest_common.sh@10 -- # set +x 
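The teardown traced above goes through the killprocess helper from autotest_common.sh: it first confirms the PID is still alive (kill -0), reads the command name with ps so it never signals anything that resolves to sudo, then kills the reactor and waits for it to exit. A minimal sketch of that pattern, assuming a plain Linux host and simplifying away the helper's extra branches:

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      # kill -0 sends no signal; it only checks that the PID exists and is signalable.
      kill -0 "$pid" 2>/dev/null || return 0
      # Mirror the traced safety check: refuse to signal a bare sudo wrapper.
      [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      # wait reaps the backgrounded SPDK app and surfaces its exit status.
      wait "$pid" 2>/dev/null || true
  }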
01:16:12.528 ************************************ 01:16:12.528 START TEST event 01:16:12.528 ************************************ 01:16:12.528 05:11:03 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:16:12.528 * Looking for test storage... 01:16:12.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1693 -- # lcov --version 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:12.528 05:11:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:12.528 05:11:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:12.528 05:11:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:12.528 05:11:04 event -- scripts/common.sh@336 -- # IFS=.-: 01:16:12.528 05:11:04 event -- scripts/common.sh@336 -- # read -ra ver1 01:16:12.528 05:11:04 event -- scripts/common.sh@337 -- # IFS=.-: 01:16:12.528 05:11:04 event -- scripts/common.sh@337 -- # read -ra ver2 01:16:12.528 05:11:04 event -- scripts/common.sh@338 -- # local 'op=<' 01:16:12.528 05:11:04 event -- scripts/common.sh@340 -- # ver1_l=2 01:16:12.528 05:11:04 event -- scripts/common.sh@341 -- # ver2_l=1 01:16:12.528 05:11:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:12.528 05:11:04 event -- scripts/common.sh@344 -- # case "$op" in 01:16:12.528 05:11:04 event -- scripts/common.sh@345 -- # : 1 01:16:12.528 05:11:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:12.528 05:11:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:12.528 05:11:04 event -- scripts/common.sh@365 -- # decimal 1 01:16:12.528 05:11:04 event -- scripts/common.sh@353 -- # local d=1 01:16:12.528 05:11:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:12.528 05:11:04 event -- scripts/common.sh@355 -- # echo 1 01:16:12.528 05:11:04 event -- scripts/common.sh@365 -- # ver1[v]=1 01:16:12.528 05:11:04 event -- scripts/common.sh@366 -- # decimal 2 01:16:12.528 05:11:04 event -- scripts/common.sh@353 -- # local d=2 01:16:12.528 05:11:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:12.528 05:11:04 event -- scripts/common.sh@355 -- # echo 2 01:16:12.528 05:11:04 event -- scripts/common.sh@366 -- # ver2[v]=2 01:16:12.528 05:11:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:12.528 05:11:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:12.528 05:11:04 event -- scripts/common.sh@368 -- # return 0 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.528 --rc genhtml_branch_coverage=1 01:16:12.528 --rc genhtml_function_coverage=1 01:16:12.528 --rc genhtml_legend=1 01:16:12.528 --rc geninfo_all_blocks=1 01:16:12.528 --rc geninfo_unexecuted_blocks=1 01:16:12.528 01:16:12.528 ' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.528 --rc genhtml_branch_coverage=1 01:16:12.528 --rc genhtml_function_coverage=1 01:16:12.528 --rc genhtml_legend=1 01:16:12.528 --rc 
geninfo_all_blocks=1 01:16:12.528 --rc geninfo_unexecuted_blocks=1 01:16:12.528 01:16:12.528 ' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.528 --rc genhtml_branch_coverage=1 01:16:12.528 --rc genhtml_function_coverage=1 01:16:12.528 --rc genhtml_legend=1 01:16:12.528 --rc geninfo_all_blocks=1 01:16:12.528 --rc geninfo_unexecuted_blocks=1 01:16:12.528 01:16:12.528 ' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:12.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.528 --rc genhtml_branch_coverage=1 01:16:12.528 --rc genhtml_function_coverage=1 01:16:12.528 --rc genhtml_legend=1 01:16:12.528 --rc geninfo_all_blocks=1 01:16:12.528 --rc geninfo_unexecuted_blocks=1 01:16:12.528 01:16:12.528 ' 01:16:12.528 05:11:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:16:12.528 05:11:04 event -- bdev/nbd_common.sh@6 -- # set -e 01:16:12.528 05:11:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:16:12.528 05:11:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:12.528 05:11:04 event -- common/autotest_common.sh@10 -- # set +x 01:16:12.528 ************************************ 01:16:12.528 START TEST event_perf 01:16:12.528 ************************************ 01:16:12.528 05:11:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:16:12.528 Running I/O for 1 seconds...[2024-12-09 05:11:04.130761] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:12.528 [2024-12-09 05:11:04.130811] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 01:16:12.839 [2024-12-09 05:11:04.270747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:16:12.839 [2024-12-09 05:11:04.308508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:12.839 [2024-12-09 05:11:04.308656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:12.839 [2024-12-09 05:11:04.309693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:16:12.839 [2024-12-09 05:11:04.309695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:13.771 Running I/O for 1 seconds... 01:16:13.771 lcore 0: 182946 01:16:13.771 lcore 1: 182946 01:16:13.771 lcore 2: 182946 01:16:13.771 lcore 3: 182949 01:16:13.771 done. 
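event_perf ran one reactor per bit of the 0xF core mask for the -t 1 second window and printed how many events each lcore processed. A hedged way to fold those per-lcore counters into a single throughput number; the "lcore N: COUNT" output format and the command line are taken from the trace, while the aggregation itself is not part of the test:

  total=$(/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 \
          | awk '/^lcore [0-9]+:/ {sum += $3} END {print sum}')
  echo "events processed across all reactors in 1s: $total"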
01:16:14.028 01:16:14.028 real 0m1.273s 01:16:14.028 user 0m4.124s 01:16:14.029 sys 0m0.031s 01:16:14.029 05:11:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:14.029 05:11:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:16:14.029 ************************************ 01:16:14.029 END TEST event_perf 01:16:14.029 ************************************ 01:16:14.029 05:11:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:16:14.029 05:11:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:16:14.029 05:11:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:14.029 05:11:05 event -- common/autotest_common.sh@10 -- # set +x 01:16:14.029 ************************************ 01:16:14.029 START TEST event_reactor 01:16:14.029 ************************************ 01:16:14.029 05:11:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:16:14.029 [2024-12-09 05:11:05.446176] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:14.029 [2024-12-09 05:11:05.446240] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 01:16:14.029 [2024-12-09 05:11:05.586437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:14.029 [2024-12-09 05:11:05.620841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:15.405 test_start 01:16:15.405 oneshot 01:16:15.405 tick 100 01:16:15.405 tick 100 01:16:15.405 tick 250 01:16:15.405 tick 100 01:16:15.405 tick 100 01:16:15.405 tick 250 01:16:15.405 tick 100 01:16:15.405 tick 500 01:16:15.405 tick 100 01:16:15.405 tick 100 01:16:15.405 tick 250 01:16:15.405 tick 100 01:16:15.405 tick 100 01:16:15.405 test_end 01:16:15.405 ************************************ 01:16:15.405 END TEST event_reactor 01:16:15.405 ************************************ 01:16:15.405 01:16:15.405 real 0m1.262s 01:16:15.405 user 0m1.127s 01:16:15.405 sys 0m0.030s 01:16:15.405 05:11:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:15.405 05:11:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:16:15.405 05:11:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:16:15.405 05:11:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:16:15.405 05:11:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:15.405 05:11:06 event -- common/autotest_common.sh@10 -- # set +x 01:16:15.405 ************************************ 01:16:15.405 START TEST event_reactor_perf 01:16:15.405 ************************************ 01:16:15.405 05:11:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:16:15.405 [2024-12-09 05:11:06.746184] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:16:15.405 [2024-12-09 05:11:06.746268] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 01:16:15.405 [2024-12-09 05:11:06.887116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:15.405 [2024-12-09 05:11:06.920612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:16.777 test_start 01:16:16.777 test_end 01:16:16.777 Performance: 413530 events per second 01:16:16.777 01:16:16.777 real 0m1.247s 01:16:16.777 user 0m1.110s 01:16:16.777 sys 0m0.031s 01:16:16.777 ************************************ 01:16:16.777 END TEST event_reactor_perf 01:16:16.777 ************************************ 01:16:16.777 05:11:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:16.777 05:11:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:16:16.777 05:11:08 event -- event/event.sh@49 -- # uname -s 01:16:16.777 05:11:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:16:16.777 05:11:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:16:16.777 05:11:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:16.777 05:11:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:16.777 05:11:08 event -- common/autotest_common.sh@10 -- # set +x 01:16:16.777 ************************************ 01:16:16.777 START TEST event_scheduler 01:16:16.777 ************************************ 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:16:16.777 * Looking for test storage... 
01:16:16.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:16.777 05:11:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:16.777 --rc genhtml_branch_coverage=1 01:16:16.777 --rc genhtml_function_coverage=1 01:16:16.777 --rc genhtml_legend=1 01:16:16.777 --rc geninfo_all_blocks=1 01:16:16.777 --rc geninfo_unexecuted_blocks=1 01:16:16.777 01:16:16.777 ' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:16.777 --rc genhtml_branch_coverage=1 01:16:16.777 --rc genhtml_function_coverage=1 01:16:16.777 --rc genhtml_legend=1 01:16:16.777 --rc geninfo_all_blocks=1 01:16:16.777 --rc geninfo_unexecuted_blocks=1 01:16:16.777 01:16:16.777 ' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:16.777 --rc genhtml_branch_coverage=1 01:16:16.777 --rc genhtml_function_coverage=1 01:16:16.777 --rc genhtml_legend=1 01:16:16.777 --rc geninfo_all_blocks=1 01:16:16.777 --rc geninfo_unexecuted_blocks=1 01:16:16.777 01:16:16.777 ' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:16.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:16.777 --rc genhtml_branch_coverage=1 01:16:16.777 --rc genhtml_function_coverage=1 01:16:16.777 --rc genhtml_legend=1 01:16:16.777 --rc geninfo_all_blocks=1 01:16:16.777 --rc geninfo_unexecuted_blocks=1 01:16:16.777 01:16:16.777 ' 01:16:16.777 05:11:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:16:16.777 05:11:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59270 01:16:16.777 05:11:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:16:16.777 05:11:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59270 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59270 ']' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:16.777 05:11:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:16:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:16.777 05:11:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:16:16.777 [2024-12-09 05:11:08.193183] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:16.777 [2024-12-09 05:11:08.193239] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59270 ] 01:16:16.777 [2024-12-09 05:11:08.332021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:16:16.777 [2024-12-09 05:11:08.369750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:16.777 [2024-12-09 05:11:08.369911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:16.777 [2024-12-09 05:11:08.369998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:16:16.777 [2024-12-09 05:11:08.369998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 01:16:17.710 05:11:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:16:17.710 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:16:17.710 POWER: Cannot set governor of lcore 0 to userspace 01:16:17.710 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:16:17.710 POWER: Cannot set governor of lcore 0 to performance 01:16:17.710 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:16:17.710 POWER: Cannot set governor of lcore 0 to userspace 01:16:17.710 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:16:17.710 POWER: Cannot set governor of lcore 0 to userspace 01:16:17.710 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 01:16:17.710 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:16:17.710 POWER: Unable to set Power Management Environment for lcore 0 01:16:17.710 [2024-12-09 05:11:09.076358] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 01:16:17.710 [2024-12-09 05:11:09.076380] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 01:16:17.710 [2024-12-09 05:11:09.076396] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 01:16:17.710 [2024-12-09 05:11:09.076416] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:16:17.710 [2024-12-09 05:11:09.076464] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:16:17.710 [2024-12-09 05:11:09.076482] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.710 05:11:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:16:17.710 [2024-12-09 05:11:09.135039] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 01:16:17.710 05:11:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.710 05:11:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:16:17.711 05:11:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:17.711 05:11:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 ************************************ 01:16:17.711 START TEST scheduler_create_thread 01:16:17.711 ************************************ 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 2 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 3 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 4 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 5 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 6 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 7 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 8 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 9 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 10 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:17.711 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:18.275 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:18.275 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:16:18.275 05:11:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:16:18.275 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:18.275 05:11:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:19.644 ************************************ 01:16:19.644 END TEST scheduler_create_thread 01:16:19.644 ************************************ 01:16:19.644 05:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:19.644 01:16:19.644 real 0m1.752s 01:16:19.644 user 0m0.013s 01:16:19.644 sys 0m0.008s 01:16:19.644 05:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:19.644 05:11:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:16:19.644 05:11:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:16:19.644 05:11:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59270 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59270 ']' 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59270 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59270 01:16:19.644 killing process with pid 59270 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59270' 01:16:19.644 05:11:10 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59270 01:16:19.644 05:11:10 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59270 01:16:19.901 [2024-12-09 05:11:11.376701] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 01:16:19.901 01:16:19.901 real 0m3.477s 01:16:19.901 user 0m6.262s 01:16:19.901 sys 0m0.270s 01:16:19.901 05:11:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:19.901 ************************************ 01:16:19.901 END TEST event_scheduler 01:16:19.901 05:11:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:16:19.901 ************************************ 01:16:20.158 05:11:11 event -- event/event.sh@51 -- # modprobe -n nbd 01:16:20.158 05:11:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:16:20.158 05:11:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:20.158 05:11:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:20.158 05:11:11 event -- common/autotest_common.sh@10 -- # set +x 01:16:20.158 ************************************ 01:16:20.158 START TEST app_repeat 01:16:20.158 ************************************ 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:16:20.158 Process app_repeat pid: 59371 01:16:20.158 spdk_app_start Round 0 01:16:20.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59371 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59371' 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:16:20.158 05:11:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59371 /var/tmp/spdk-nbd.sock 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:20.158 05:11:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:20.158 [2024-12-09 05:11:11.568713] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
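For reference, the scheduler sequence traced above can be driven by hand once the scheduler test app is listening with --wait-for-rpc: switch to the dynamic scheduler (on this VM the DPDK governor fails because no cpufreq sysfs is present, and the app falls back as logged), run framework_start_init, then exercise the thread RPCs. A sketch under those assumptions; scheduler_thread_* are test-only RPCs from the scheduler_plugin module, which the harness makes importable for rpc.py:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  # Create a pinned 100%-active thread on core 0, throttle it to 50%, delete it.
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"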
01:16:20.158 [2024-12-09 05:11:11.568909] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 01:16:20.158 [2024-12-09 05:11:11.705178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:20.158 [2024-12-09 05:11:11.736369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:20.158 [2024-12-09 05:11:11.736555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:21.110 05:11:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:21.110 05:11:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:21.110 05:11:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:21.110 Malloc0 01:16:21.110 05:11:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:21.368 Malloc1 01:16:21.368 05:11:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:21.368 05:11:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:21.627 /dev/nbd0 01:16:21.627 05:11:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:21.627 05:11:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:21.627 05:11:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:21.627 1+0 records in 01:16:21.627 1+0 records out 01:16:21.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221104 s, 18.5 MB/s 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:21.627 05:11:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:21.627 05:11:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:21.627 05:11:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:21.627 05:11:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:21.886 /dev/nbd1 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:21.886 1+0 records in 01:16:21.886 1+0 records out 01:16:21.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200677 s, 20.4 MB/s 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:21.886 05:11:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
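Both exports pass through the same waitfornbd gate before any data is trusted: poll /proc/partitions until the kernel lists the device, then read one 4 KiB block with direct I/O and check the copy is non-empty. A simplified sketch of that pattern; the retry count matches the trace, while the temp file path and the 0.1 s sleep are placeholder choices rather than the helper's exact values:

  waitfornbd() {
      local nbd_name=$1 i size
      # Wait for the kernel to publish the device in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One direct-I/O block read proves the export is actually serving data.
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [[ $size -ne 0 ]]
  }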
01:16:21.886 05:11:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:22.144 05:11:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:22.144 { 01:16:22.144 "bdev_name": "Malloc0", 01:16:22.144 "nbd_device": "/dev/nbd0" 01:16:22.144 }, 01:16:22.145 { 01:16:22.145 "bdev_name": "Malloc1", 01:16:22.145 "nbd_device": "/dev/nbd1" 01:16:22.145 } 01:16:22.145 ]' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:22.145 { 01:16:22.145 "bdev_name": "Malloc0", 01:16:22.145 "nbd_device": "/dev/nbd0" 01:16:22.145 }, 01:16:22.145 { 01:16:22.145 "bdev_name": "Malloc1", 01:16:22.145 "nbd_device": "/dev/nbd1" 01:16:22.145 } 01:16:22.145 ]' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:22.145 /dev/nbd1' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:22.145 /dev/nbd1' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:22.145 256+0 records in 01:16:22.145 256+0 records out 01:16:22.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735829 s, 143 MB/s 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:22.145 256+0 records in 01:16:22.145 256+0 records out 01:16:22.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156367 s, 67.1 MB/s 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:22.145 256+0 records in 01:16:22.145 256+0 records out 01:16:22.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190786 s, 55.0 MB/s 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:22.145 05:11:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:22.145 05:11:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:22.403 05:11:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:22.403 05:11:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:22.403 05:11:14 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:22.661 05:11:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:22.662 05:11:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:22.662 05:11:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:22.662 05:11:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:22.919 05:11:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:23.178 [2024-12-09 05:11:14.570729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:23.178 [2024-12-09 05:11:14.600578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:23.178 [2024-12-09 05:11:14.600733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:23.178 [2024-12-09 05:11:14.628330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:23.178 [2024-12-09 05:11:14.628369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:26.456 spdk_app_start Round 1 01:16:26.456 05:11:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:16:26.456 05:11:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:16:26.456 05:11:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59371 /var/tmp/spdk-nbd.sock 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:26.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
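The count=0 check that closed the first round can be reproduced with the same RPCs: list the NBD devices still exported over the app's socket and count the device paths. A hedged sketch; the jq filter and socket path match the trace, and the real helper folds this into nbd_get_count:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  disks=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)
  # After nbd_stop_disk has run for both devices, the expected count is 0.
  [[ $count -eq 0 ]] && echo "all NBD exports removed"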
01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:26.456 05:11:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:26.456 05:11:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:26.456 Malloc0 01:16:26.456 05:11:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:26.456 Malloc1 01:16:26.456 05:11:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:26.456 05:11:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:26.713 /dev/nbd0 01:16:26.713 05:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:26.713 05:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:26.713 1+0 records in 01:16:26.713 1+0 records out 
01:16:26.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170167 s, 24.1 MB/s 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:26.713 05:11:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:26.713 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:26.713 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:26.713 05:11:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:26.971 /dev/nbd1 01:16:26.971 05:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:26.971 05:11:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:26.971 1+0 records in 01:16:26.971 1+0 records out 01:16:26.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025123 s, 16.3 MB/s 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:26.971 05:11:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:26.972 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:26.972 05:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:26.972 05:11:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:26.972 05:11:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:26.972 05:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:27.229 { 01:16:27.229 "bdev_name": "Malloc0", 01:16:27.229 "nbd_device": "/dev/nbd0" 01:16:27.229 }, 01:16:27.229 { 01:16:27.229 "bdev_name": "Malloc1", 01:16:27.229 "nbd_device": "/dev/nbd1" 01:16:27.229 } 
01:16:27.229 ]' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:27.229 { 01:16:27.229 "bdev_name": "Malloc0", 01:16:27.229 "nbd_device": "/dev/nbd0" 01:16:27.229 }, 01:16:27.229 { 01:16:27.229 "bdev_name": "Malloc1", 01:16:27.229 "nbd_device": "/dev/nbd1" 01:16:27.229 } 01:16:27.229 ]' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:27.229 /dev/nbd1' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:27.229 /dev/nbd1' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:27.229 05:11:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:27.230 256+0 records in 01:16:27.230 256+0 records out 01:16:27.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00899495 s, 117 MB/s 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:27.230 256+0 records in 01:16:27.230 256+0 records out 01:16:27.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159115 s, 65.9 MB/s 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:27.230 256+0 records in 01:16:27.230 256+0 records out 01:16:27.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186889 s, 56.1 MB/s 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:27.230 05:11:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:27.230 05:11:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:27.487 05:11:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:27.487 05:11:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:27.744 05:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:28.002 05:11:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:28.002 05:11:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:28.259 05:11:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:28.259 [2024-12-09 05:11:19.755744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:28.259 [2024-12-09 05:11:19.787254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:28.259 [2024-12-09 05:11:19.787289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:28.259 [2024-12-09 05:11:19.816548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:28.259 [2024-12-09 05:11:19.816591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:31.539 05:11:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:16:31.539 spdk_app_start Round 2 01:16:31.539 05:11:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:16:31.539 05:11:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59371 /var/tmp/spdk-nbd.sock 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:31.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
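[editor's note] Each nbd_start_disk in the rounds above is followed by a waitfornbd check, which is why a single-block dd read and a stat show up in the log before any data is written. The shape of that check, reconstructed from the trace; the /tmp/nbdtest scratch path and the 0.1 s retry delay are assumptions standing in for the repo-local path and timing used by the real helper:

  waitfornbd() {
      local nbd_name=$1 i
      # Phase 1: wait for the device to appear in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              break
          fi
          sleep 0.1
      done
      # Phase 2: prove the device accepts I/O with one 4 KiB O_DIRECT read.
      for ((i = 1; i <= 20; i++)); do
          if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
              [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]; then
              rm -f /tmp/nbdtest
              return 0
          fi
          sleep 0.1
      done
      rm -f /tmp/nbdtest
      return 1
  }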
01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:31.539 05:11:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:31.539 05:11:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:31.539 Malloc0 01:16:31.539 05:11:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:16:31.797 Malloc1 01:16:31.797 05:11:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:31.797 05:11:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:16:32.055 /dev/nbd0 01:16:32.055 05:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:16:32.055 05:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:32.055 1+0 records in 01:16:32.055 1+0 records out 
01:16:32.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145122 s, 28.2 MB/s 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:32.055 05:11:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:32.055 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:32.055 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:32.055 05:11:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:16:32.313 /dev/nbd1 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:16:32.313 1+0 records in 01:16:32.313 1+0 records out 01:16:32.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000107966 s, 37.9 MB/s 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:16:32.313 05:11:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:32.313 05:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:32.572 05:11:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:16:32.572 { 01:16:32.572 "bdev_name": "Malloc0", 01:16:32.572 "nbd_device": "/dev/nbd0" 01:16:32.572 }, 01:16:32.572 { 01:16:32.572 "bdev_name": "Malloc1", 01:16:32.572 "nbd_device": "/dev/nbd1" 01:16:32.572 } 
01:16:32.572 ]' 01:16:32.572 05:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:16:32.572 { 01:16:32.572 "bdev_name": "Malloc0", 01:16:32.572 "nbd_device": "/dev/nbd0" 01:16:32.572 }, 01:16:32.572 { 01:16:32.572 "bdev_name": "Malloc1", 01:16:32.572 "nbd_device": "/dev/nbd1" 01:16:32.572 } 01:16:32.572 ]' 01:16:32.572 05:11:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:16:32.572 /dev/nbd1' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:16:32.572 /dev/nbd1' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:16:32.572 256+0 records in 01:16:32.572 256+0 records out 01:16:32.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120136 s, 87.3 MB/s 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:16:32.572 256+0 records in 01:16:32.572 256+0 records out 01:16:32.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015607 s, 67.2 MB/s 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:16:32.572 256+0 records in 01:16:32.572 256+0 records out 01:16:32.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247459 s, 42.4 MB/s 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:32.572 05:11:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:16:32.573 05:11:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:16:32.573 05:11:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:16:32.573 05:11:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:32.573 05:11:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:16:32.830 05:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:16:32.830 05:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:16:32.830 05:11:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:16:32.831 05:11:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:16:33.089 05:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:16:33.347 05:11:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:16:33.347 05:11:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:16:33.618 05:11:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:16:33.618 [2024-12-09 05:11:25.036087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:16:33.618 [2024-12-09 05:11:25.066874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:33.618 [2024-12-09 05:11:25.067007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:33.618 [2024-12-09 05:11:25.094859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:16:33.618 [2024-12-09 05:11:25.094900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:16:36.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:16:36.900 05:11:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59371 /var/tmp/spdk-nbd.sock 01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
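[editor's note] Every round above runs the same write-then-verify pass over the two NBD devices (the dd/cmp pairs in the trace). A sketch of that nbd_dd_data_verify helper as it can be read back out of the xtrace — the /tmp/nbdrandtest path stands in for the repo-local temp file shown in the log, and details are approximate rather than the upstream source:

  nbd_dd_data_verify() {
      local nbd_list=($1) operation=$2 i
      local tmp_file=/tmp/nbdrandtest
      if [ "$operation" = write ]; then
          # 1 MiB of random data, copied onto every NBD device with O_DIRECT.
          dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
          for i in "${nbd_list[@]}"; do
              dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
          done
      elif [ "$operation" = verify ]; then
          # Compare the first 1 MiB of each device byte-for-byte against the same
          # temp file; a non-zero cmp status fails the test. The file is removed last.
          for i in "${nbd_list[@]}"; do
              cmp -b -n 1M "$tmp_file" "$i"
          done
          rm "$tmp_file"
      fi
  }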
01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:36.900 05:11:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:16:36.900 05:11:28 event.app_repeat -- event/event.sh@39 -- # killprocess 59371 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59371 ']' 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59371 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59371 01:16:36.900 killing process with pid 59371 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59371' 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59371 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59371 01:16:36.900 spdk_app_start is called in Round 0. 01:16:36.900 Shutdown signal received, stop current app iteration 01:16:36.900 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:16:36.900 spdk_app_start is called in Round 1. 01:16:36.900 Shutdown signal received, stop current app iteration 01:16:36.900 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:16:36.900 spdk_app_start is called in Round 2. 01:16:36.900 Shutdown signal received, stop current app iteration 01:16:36.900 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 01:16:36.900 spdk_app_start is called in Round 3. 01:16:36.900 Shutdown signal received, stop current app iteration 01:16:36.900 05:11:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:16:36.900 05:11:28 event.app_repeat -- event/event.sh@42 -- # return 0 01:16:36.900 01:16:36.900 real 0m16.747s 01:16:36.900 user 0m37.445s 01:16:36.900 sys 0m2.090s 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:36.900 05:11:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:16:36.900 ************************************ 01:16:36.900 END TEST app_repeat 01:16:36.900 ************************************ 01:16:36.900 05:11:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:16:36.900 05:11:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:16:36.900 05:11:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:36.900 05:11:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:36.900 05:11:28 event -- common/autotest_common.sh@10 -- # set +x 01:16:36.900 ************************************ 01:16:36.900 START TEST cpu_locks 01:16:36.900 ************************************ 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:16:36.900 * Looking for test storage... 
01:16:36.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:36.900 05:11:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:36.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:36.900 --rc genhtml_branch_coverage=1 01:16:36.900 --rc genhtml_function_coverage=1 01:16:36.900 --rc genhtml_legend=1 01:16:36.900 --rc geninfo_all_blocks=1 01:16:36.900 --rc geninfo_unexecuted_blocks=1 01:16:36.900 01:16:36.900 ' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:36.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:36.900 --rc genhtml_branch_coverage=1 01:16:36.900 --rc genhtml_function_coverage=1 
01:16:36.900 --rc genhtml_legend=1 01:16:36.900 --rc geninfo_all_blocks=1 01:16:36.900 --rc geninfo_unexecuted_blocks=1 01:16:36.900 01:16:36.900 ' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:36.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:36.900 --rc genhtml_branch_coverage=1 01:16:36.900 --rc genhtml_function_coverage=1 01:16:36.900 --rc genhtml_legend=1 01:16:36.900 --rc geninfo_all_blocks=1 01:16:36.900 --rc geninfo_unexecuted_blocks=1 01:16:36.900 01:16:36.900 ' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:36.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:36.900 --rc genhtml_branch_coverage=1 01:16:36.900 --rc genhtml_function_coverage=1 01:16:36.900 --rc genhtml_legend=1 01:16:36.900 --rc geninfo_all_blocks=1 01:16:36.900 --rc geninfo_unexecuted_blocks=1 01:16:36.900 01:16:36.900 ' 01:16:36.900 05:11:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:16:36.900 05:11:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:16:36.900 05:11:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:16:36.900 05:11:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:36.900 05:11:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:36.900 ************************************ 01:16:36.900 START TEST default_locks 01:16:36.900 ************************************ 01:16:36.900 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 01:16:36.900 05:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59982 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59982 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59982 ']' 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:36.901 05:11:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:16:37.159 [2024-12-09 05:11:28.520739] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:16:37.159 [2024-12-09 05:11:28.520800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 01:16:37.159 [2024-12-09 05:11:28.656990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:37.159 [2024-12-09 05:11:28.687676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59982 ']' 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:38.134 killing process with pid 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59982' 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59982 01:16:38.134 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59982 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59982 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59982 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59982 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59982 ']' 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:38.392 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:16:38.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59982) - No such process 01:16:38.392 ERROR: process (pid: 59982) is no longer running 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:16:38.392 01:16:38.392 real 0m1.436s 01:16:38.392 user 0m1.571s 01:16:38.392 sys 0m0.355s 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:38.392 05:11:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:16:38.392 ************************************ 01:16:38.392 END TEST default_locks 01:16:38.392 ************************************ 01:16:38.392 05:11:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:16:38.392 05:11:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:38.392 05:11:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:38.392 05:11:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:38.392 ************************************ 01:16:38.392 START TEST default_locks_via_rpc 01:16:38.392 ************************************ 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60040 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60040 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60040 ']' 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:38.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
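[editor's note] The default_locks test that just finished hinges on two small checks that recur through the rest of this log: locks_exist (a live target must hold its CPU-core flock) and no_locks (no lock files may survive once the target dies, which is why the failed "NOT waitforlisten" and the "No such process" line above are the expected outcome). Sketches of both, reconstructed from the trace; the /var/tmp/spdk_cpu_lock* glob and the nullglob handling are assumptions based on the spdk_cpu_lock name that lslocks greps for:

  locks_exist() {
      # The live target is expected to hold an flock named spdk_cpu_lock for
      # every core in its mask; lslocks lists the flocks held by the given pid.
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  no_locks() {
      # After the target exits, no cpu-lock files should remain on disk.
      shopt -s nullglob
      local lock_files=(/var/tmp/spdk_cpu_lock*)
      shopt -u nullglob
      (( ${#lock_files[@]} == 0 ))
  }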
01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:38.392 05:11:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:38.392 [2024-12-09 05:11:30.003979] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:38.392 [2024-12-09 05:11:30.004064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 01:16:38.650 [2024-12-09 05:11:30.146080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:38.650 [2024-12-09 05:11:30.176624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:39.584 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60040 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60040 01:16:39.585 05:11:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60040 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60040 ']' 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60040 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60040 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:39.585 killing process with pid 60040 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60040' 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60040 01:16:39.585 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60040 01:16:39.843 ************************************ 01:16:39.843 END TEST default_locks_via_rpc 01:16:39.843 ************************************ 01:16:39.843 01:16:39.843 real 0m1.333s 01:16:39.843 user 0m1.455s 01:16:39.843 sys 0m0.325s 01:16:39.843 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:39.843 05:11:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:39.843 05:11:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:16:39.843 05:11:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:39.843 05:11:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:39.843 05:11:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:39.843 ************************************ 01:16:39.843 START TEST non_locking_app_on_locked_coremask 01:16:39.843 ************************************ 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60104 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60104 /var/tmp/spdk.sock 01:16:39.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60104 ']' 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:39.843 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:16:39.844 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:39.844 05:11:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:39.844 [2024-12-09 05:11:31.369229] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:16:39.844 [2024-12-09 05:11:31.369398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60104 ] 01:16:40.102 [2024-12-09 05:11:31.508190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:40.102 [2024-12-09 05:11:31.538480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:40.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60132 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60132 /var/tmp/spdk2.sock 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60132 ']' 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:40.669 05:11:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:40.669 [2024-12-09 05:11:32.280113] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:40.669 [2024-12-09 05:11:32.280328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60132 ] 01:16:40.928 [2024-12-09 05:11:32.421400] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
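non_locking_app_on_locked_coremask, whose startup is traced above, is the reason --disable-cpumask-locks exists: pid 60104 already owns the core 0 lock, yet pid 60132 comes up on the same -m 0x1 mask because it opts out of locking and listens on a second RPC socket. A minimal reproduction of that arrangement (same assumptions as the previous sketch):

  SPDK=/home/vagrant/spdk_repo/spdk

  # First target claims core 0 and the default RPC socket.
  $SPDK/build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  sleep 2

  # Second target shares core 0 only because it skips the cpumask lock;
  # it needs its own RPC socket to avoid clashing with the first one.
  $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 2

  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core 0 lock still owned by the first target"
  kill "$pid1" "$pid2"; wait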
01:16:40.928 [2024-12-09 05:11:32.421430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:40.928 [2024-12-09 05:11:32.483759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:41.877 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:41.877 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:41.877 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60104 01:16:41.877 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60104 01:16:41.877 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60104 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60104 ']' 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60104 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60104 01:16:42.137 killing process with pid 60104 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60104' 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60104 01:16:42.137 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60104 01:16:42.395 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60132 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60132 ']' 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60132 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60132 01:16:42.396 killing process with pid 60132 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60132' 01:16:42.396 05:11:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60132 01:16:42.396 05:11:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60132 01:16:42.654 01:16:42.654 real 0m2.880s 01:16:42.654 user 0m3.279s 01:16:42.654 sys 0m0.677s 01:16:42.655 05:11:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:42.655 05:11:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:42.655 ************************************ 01:16:42.655 END TEST non_locking_app_on_locked_coremask 01:16:42.655 ************************************ 01:16:42.655 05:11:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:16:42.655 05:11:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:42.655 05:11:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:42.655 05:11:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:42.655 ************************************ 01:16:42.655 START TEST locking_app_on_unlocked_coremask 01:16:42.655 ************************************ 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60201 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60201 /var/tmp/spdk.sock 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60201 ']' 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:42.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:42.655 05:11:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:42.912 [2024-12-09 05:11:34.288011] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:42.912 [2024-12-09 05:11:34.288073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 01:16:42.912 [2024-12-09 05:11:34.418369] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
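Every section here leans on the harness's waitforlisten helper; its loop body is hidden behind xtrace_disable, so the trace only shows the rpc_addr and max_retries=100 locals plus the final return 0. A hypothetical stand-in (an assumption about the behaviour, not the harness's actual code) would simply poll the RPC socket until it answers:

  # Hypothetical stand-in for waitforlisten: poll an SPDK RPC socket until it responds.
  # The real helper in autotest_common.sh also tracks the target pid and per-retry timeouts.
  wait_for_rpc() {
      local rpc_addr=${1:-/var/tmp/spdk.sock}
      local max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }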
01:16:42.912 [2024-12-09 05:11:34.418504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:42.912 [2024-12-09 05:11:34.448581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:43.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60223 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60223 /var/tmp/spdk2.sock 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60223 ']' 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:43.847 05:11:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:43.847 [2024-12-09 05:11:35.159490] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
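locking_app_on_unlocked_coremask flips the roles: the first target (pid 60201) starts with --disable-cpumask-locks, leaving the core 0 lock free for the second, normally started target (pid 60223) to claim, which is what the locks_exist 60223 check further down verifies. Sketched under the same assumptions:

  SPDK=/home/vagrant/spdk_repo/spdk

  $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # runs unlocked
  pid1=$!
  sleep 2

  $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock
  pid2=$!
  sleep 2

  # The flock should belong to the second process, not the first.
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "lock held by pid $pid2"
  lslocks -p "$pid1" | grep -q spdk_cpu_lock || echo "no lock held by pid $pid1"

  kill "$pid1" "$pid2"; wait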
01:16:43.847 [2024-12-09 05:11:35.159692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 01:16:43.847 [2024-12-09 05:11:35.301038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:43.847 [2024-12-09 05:11:35.359309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60223 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60223 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60201 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60201 ']' 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60201 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60201 01:16:44.782 killing process with pid 60201 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60201' 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60201 01:16:44.782 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60201 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60223 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60223 ']' 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60223 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60223 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:45.348 killing process with pid 60223 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60223' 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60223 01:16:45.348 05:11:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60223 01:16:45.606 ************************************ 01:16:45.606 END TEST locking_app_on_unlocked_coremask 01:16:45.606 ************************************ 01:16:45.606 01:16:45.606 real 0m2.758s 01:16:45.606 user 0m3.137s 01:16:45.606 sys 0m0.622s 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:45.606 05:11:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:16:45.606 05:11:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:45.606 05:11:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:45.606 05:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:45.606 ************************************ 01:16:45.606 START TEST locking_app_on_locked_coremask 01:16:45.606 ************************************ 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60292 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60292 /var/tmp/spdk.sock 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60292 ']' 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:45.606 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:45.606 [2024-12-09 05:11:37.091089] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
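The killprocess calls that close each section follow the same pattern visible in the trace: check the pid is still alive, read its comm name with ps (an SPDK reactor shows up as reactor_0), refuse to signal anything that looks like sudo, then kill and wait. Condensed, that is roughly:

  # Condensed form of the killprocess pattern used throughout this suite.
  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid" 2>/dev/null || return 0              # already gone
      process_name=$(ps --no-headers -o comm= "$pid")     # reactors report as reactor_N
      [ "$process_name" = sudo ] && return 1              # never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                     # wait only applies to our own children
  }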
01:16:45.606 [2024-12-09 05:11:37.091275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60292 ] 01:16:45.865 [2024-12-09 05:11:37.226774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:45.865 [2024-12-09 05:11:37.257987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60307 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60307 /var/tmp/spdk2.sock 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60307 /var/tmp/spdk2.sock 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:16:45.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60307 /var/tmp/spdk2.sock 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60307 ']' 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:45.865 05:11:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:45.865 [2024-12-09 05:11:37.470602] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
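locking_app_on_locked_coremask is the negative case: pid 60292 already holds the core 0 lock, and the lines that follow show the second target aborting with 'Cannot create lock on core 0 ... Unable to acquire lock on assigned core mask - exiting.' Reproducing that expected failure outside the harness is just:

  SPDK=/home/vagrant/spdk_repo/spdk

  $SPDK/build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  sleep 2

  # Same mask, locks left enabled: this launch is expected to fail.
  if $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "unexpected: second target acquired core 0"
  else
      echo "expected failure: core 0 is already locked by pid $pid1"
  fi

  kill "$pid1"; wait "$pid1"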
01:16:45.865 [2024-12-09 05:11:37.470662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 01:16:46.123 [2024-12-09 05:11:37.614590] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60292 has claimed it. 01:16:46.123 [2024-12-09 05:11:37.614627] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:16:46.689 ERROR: process (pid: 60307) is no longer running 01:16:46.689 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60307) - No such process 01:16:46.689 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60292 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60292 01:16:46.690 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60292 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60292 ']' 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60292 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60292 01:16:46.947 killing process with pid 60292 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60292' 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60292 01:16:46.947 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60292 01:16:47.205 ************************************ 01:16:47.205 END TEST locking_app_on_locked_coremask 01:16:47.205 ************************************ 01:16:47.205 01:16:47.205 real 0m1.604s 01:16:47.205 user 0m1.809s 01:16:47.205 sys 0m0.409s 01:16:47.205 05:11:38 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:47.205 05:11:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:47.205 05:11:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:16:47.205 05:11:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:47.205 05:11:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:47.205 05:11:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:47.205 ************************************ 01:16:47.205 START TEST locking_overlapped_coremask 01:16:47.205 ************************************ 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60353 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60353 /var/tmp/spdk.sock 01:16:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60353 ']' 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:47.205 05:11:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:16:47.205 [2024-12-09 05:11:38.741295] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:16:47.205 [2024-12-09 05:11:38.741350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60353 ] 01:16:47.464 [2024-12-09 05:11:38.871403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:16:47.464 [2024-12-09 05:11:38.903349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:47.464 [2024-12-09 05:11:38.903636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:47.464 [2024-12-09 05:11:38.903733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60383 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60383 /var/tmp/spdk2.sock 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60383 /var/tmp/spdk2.sock 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:16:48.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60383 /var/tmp/spdk2.sock 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60383 ']' 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:48.029 05:11:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:48.029 [2024-12-09 05:11:39.615325] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
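locking_overlapped_coremask picks two masks that overlap on exactly one core: the first target (pid 60353) runs with -m 0x7 and the second is launched with -m 0x1c, so the 'Cannot create lock on core 2' error that follows is the intended collision. The mask-to-core mapping is plain bit arithmetic:

  # 0x7  = 0b00111 -> cores 0,1,2   (first target, pid 60353)
  # 0x1c = 0b11100 -> cores 2,3,4   (second target, expected to fail)
  # Overlap: 0x7 & 0x1c = 0x4, i.e. core 2 -- the core named in the error below.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4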
01:16:48.029 [2024-12-09 05:11:39.615378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60383 ] 01:16:48.288 [2024-12-09 05:11:39.771614] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60353 has claimed it. 01:16:48.288 [2024-12-09 05:11:39.771656] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:16:48.859 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60383) - No such process 01:16:48.859 ERROR: process (pid: 60383) is no longer running 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60353 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60353 ']' 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60353 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60353 01:16:48.859 killing process with pid 60353 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60353' 01:16:48.859 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60353 01:16:48.859 05:11:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60353 01:16:49.117 ************************************ 01:16:49.117 END TEST locking_overlapped_coremask 01:16:49.117 ************************************ 01:16:49.117 01:16:49.117 real 0m1.833s 01:16:49.117 user 0m5.167s 01:16:49.117 sys 0m0.295s 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:16:49.117 05:11:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:16:49.117 05:11:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:49.117 05:11:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:49.117 05:11:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:49.117 ************************************ 01:16:49.117 START TEST locking_overlapped_coremask_via_rpc 01:16:49.117 ************************************ 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60429 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60429 /var/tmp/spdk.sock 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60429 ']' 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:49.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:49.117 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:16:49.118 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:49.118 05:11:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:49.118 [2024-12-09 05:11:40.609604] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:49.118 [2024-12-09 05:11:40.609656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60429 ] 01:16:49.376 [2024-12-09 05:11:40.743210] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
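check_remaining_locks, run after the failed second target above and again at the end of the via_rpc variant below, is nothing more than a glob comparison: the only lock files left must be the ones belonging to the surviving -m 0x7 target, i.e. cores 0 through 2. Roughly:

  # Rough equivalent of the check_remaining_locks step seen in the trace.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }
  check_remaining_locks && echo "only the expected core locks remain"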
01:16:49.376 [2024-12-09 05:11:40.743234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:16:49.376 [2024-12-09 05:11:40.776196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:49.376 [2024-12-09 05:11:40.776472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:49.376 [2024-12-09 05:11:40.776472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60459 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60459 /var/tmp/spdk2.sock 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60459 ']' 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:16:49.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:49.943 05:11:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:49.943 [2024-12-09 05:11:41.531084] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:49.943 [2024-12-09 05:11:41.531139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 01:16:50.201 [2024-12-09 05:11:41.684935] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:16:50.201 [2024-12-09 05:11:41.684962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:16:50.202 [2024-12-09 05:11:41.757592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:16:50.202 [2024-12-09 05:11:41.757778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:50.202 [2024-12-09 05:11:41.757782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:51.137 [2024-12-09 05:11:42.430633] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60429 has claimed it. 01:16:51.137 request: 01:16:51.137 { 01:16:51.137 2024/12/09 05:11:42 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 01:16:51.137 "method": "framework_enable_cpumask_locks", 01:16:51.137 "params": {} 01:16:51.137 } 01:16:51.137 Got JSON-RPC error response 01:16:51.137 GoRPCClient: error on JSON-RPC call 01:16:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
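The via_rpc variant starts both targets with --disable-cpumask-locks and only then turns locking on: framework_enable_cpumask_locks succeeds on the first instance (mask 0x7), while the same call against the second instance's socket fails with the Code=-32603 'Failed to claim CPU core: 2' response shown above. Driving it by hand, assuming the same method names are exposed through scripts/rpc.py, would look like:

  SPDK=/home/vagrant/spdk_repo/spdk

  # Both instances come up unlocked (mirrors pids 60429 and 60459 above).
  $SPDK/build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  $SPDK/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  sleep 2

  # First instance claims cores 0-2 after the fact.
  $SPDK/scripts/rpc.py framework_enable_cpumask_locks

  # Second instance overlaps on core 2, so this call returns the -32603 error.
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "expected: core 2 already claimed"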
01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60429 /var/tmp/spdk.sock 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60429 ']' 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60459 /var/tmp/spdk2.sock 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60459 ']' 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
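The cleanup that closes the cpu_locks suite just below has to tolerate targets that already exited (hence the 'No such process' and 'Process with pid ... is not found' messages later in the trace): it re-runs killprocess for every recorded pid and finishes with an rm -f whose arguments are trimmed in this log, presumably the leftover /var/tmp/spdk_cpu_lock_* files. Approximately:

  # Approximate shape of the cleanup step; pid1/pid2 stand for the recorded target pids.
  cleanup() {
      local pid
      for pid in "$pid1" "$pid2"; do
          [[ -z $pid ]] && continue
          kill "$pid" 2>/dev/null || echo "Process with pid $pid is not found"
          wait "$pid" 2>/dev/null || true
      done
      rm -f /var/tmp/spdk_cpu_lock_*    # assumption: clear any stale core lock files
  }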
01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:51.137 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:51.394 ************************************ 01:16:51.394 END TEST locking_overlapped_coremask_via_rpc 01:16:51.394 ************************************ 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:16:51.394 01:16:51.394 real 0m2.287s 01:16:51.394 user 0m1.054s 01:16:51.394 sys 0m0.168s 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:51.394 05:11:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:16:51.394 05:11:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:16:51.394 05:11:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60429 ]] 01:16:51.394 05:11:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60429 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60429 ']' 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60429 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60429 01:16:51.394 killing process with pid 60429 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:51.394 05:11:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60429' 01:16:51.395 05:11:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60429 01:16:51.395 05:11:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60429 01:16:51.652 05:11:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60459 ]] 01:16:51.652 05:11:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60459 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60459 ']' 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60459 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:51.652 
05:11:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60459 01:16:51.652 killing process with pid 60459 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60459' 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60459 01:16:51.652 05:11:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60459 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:16:51.910 Process with pid 60429 is not found 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60429 ]] 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60429 01:16:51.910 05:11:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60429 ']' 01:16:51.910 05:11:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60429 01:16:51.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60429) - No such process 01:16:51.910 05:11:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60429 is not found' 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60459 ]] 01:16:51.910 05:11:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60459 01:16:51.910 05:11:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60459 ']' 01:16:51.910 05:11:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60459 01:16:51.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60459) - No such process 01:16:51.911 Process with pid 60459 is not found 01:16:51.911 05:11:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60459 is not found' 01:16:51.911 05:11:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:16:51.911 01:16:51.911 real 0m15.050s 01:16:51.911 user 0m27.808s 01:16:51.911 sys 0m3.421s 01:16:51.911 ************************************ 01:16:51.911 END TEST cpu_locks 01:16:51.911 ************************************ 01:16:51.911 05:11:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:51.911 05:11:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:16:51.911 ************************************ 01:16:51.911 END TEST event 01:16:51.911 ************************************ 01:16:51.911 01:16:51.911 real 0m39.452s 01:16:51.911 user 1m18.039s 01:16:51.911 sys 0m6.097s 01:16:51.911 05:11:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:51.911 05:11:43 event -- common/autotest_common.sh@10 -- # set +x 01:16:51.911 05:11:43 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:16:51.911 05:11:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:51.911 05:11:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:51.911 05:11:43 -- common/autotest_common.sh@10 -- # set +x 01:16:51.911 ************************************ 01:16:51.911 START TEST thread 01:16:51.911 ************************************ 01:16:51.911 05:11:43 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:16:51.911 * Looking for test storage... 
01:16:51.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:16:51.911 05:11:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:51.911 05:11:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 01:16:51.911 05:11:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:52.169 05:11:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:52.169 05:11:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:52.169 05:11:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:52.169 05:11:43 thread -- scripts/common.sh@336 -- # IFS=.-: 01:16:52.169 05:11:43 thread -- scripts/common.sh@336 -- # read -ra ver1 01:16:52.169 05:11:43 thread -- scripts/common.sh@337 -- # IFS=.-: 01:16:52.169 05:11:43 thread -- scripts/common.sh@337 -- # read -ra ver2 01:16:52.169 05:11:43 thread -- scripts/common.sh@338 -- # local 'op=<' 01:16:52.169 05:11:43 thread -- scripts/common.sh@340 -- # ver1_l=2 01:16:52.169 05:11:43 thread -- scripts/common.sh@341 -- # ver2_l=1 01:16:52.169 05:11:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:52.169 05:11:43 thread -- scripts/common.sh@344 -- # case "$op" in 01:16:52.169 05:11:43 thread -- scripts/common.sh@345 -- # : 1 01:16:52.169 05:11:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:52.169 05:11:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:52.169 05:11:43 thread -- scripts/common.sh@365 -- # decimal 1 01:16:52.169 05:11:43 thread -- scripts/common.sh@353 -- # local d=1 01:16:52.169 05:11:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:52.169 05:11:43 thread -- scripts/common.sh@355 -- # echo 1 01:16:52.169 05:11:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:16:52.169 05:11:43 thread -- scripts/common.sh@366 -- # decimal 2 01:16:52.169 05:11:43 thread -- scripts/common.sh@353 -- # local d=2 01:16:52.169 05:11:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:52.169 05:11:43 thread -- scripts/common.sh@355 -- # echo 2 01:16:52.169 05:11:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:16:52.169 05:11:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:52.169 05:11:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:52.169 05:11:43 thread -- scripts/common.sh@368 -- # return 0 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:52.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:52.169 --rc genhtml_branch_coverage=1 01:16:52.169 --rc genhtml_function_coverage=1 01:16:52.169 --rc genhtml_legend=1 01:16:52.169 --rc geninfo_all_blocks=1 01:16:52.169 --rc geninfo_unexecuted_blocks=1 01:16:52.169 01:16:52.169 ' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:52.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:52.169 --rc genhtml_branch_coverage=1 01:16:52.169 --rc genhtml_function_coverage=1 01:16:52.169 --rc genhtml_legend=1 01:16:52.169 --rc geninfo_all_blocks=1 01:16:52.169 --rc geninfo_unexecuted_blocks=1 01:16:52.169 01:16:52.169 ' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:52.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:16:52.169 --rc genhtml_branch_coverage=1 01:16:52.169 --rc genhtml_function_coverage=1 01:16:52.169 --rc genhtml_legend=1 01:16:52.169 --rc geninfo_all_blocks=1 01:16:52.169 --rc geninfo_unexecuted_blocks=1 01:16:52.169 01:16:52.169 ' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:52.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:52.169 --rc genhtml_branch_coverage=1 01:16:52.169 --rc genhtml_function_coverage=1 01:16:52.169 --rc genhtml_legend=1 01:16:52.169 --rc geninfo_all_blocks=1 01:16:52.169 --rc geninfo_unexecuted_blocks=1 01:16:52.169 01:16:52.169 ' 01:16:52.169 05:11:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:52.169 05:11:43 thread -- common/autotest_common.sh@10 -- # set +x 01:16:52.169 ************************************ 01:16:52.169 START TEST thread_poller_perf 01:16:52.169 ************************************ 01:16:52.169 05:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:16:52.169 [2024-12-09 05:11:43.621125] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:52.169 [2024-12-09 05:11:43.621191] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60609 ] 01:16:52.169 [2024-12-09 05:11:43.754876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:52.169 [2024-12-09 05:11:43.785036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:52.169 Running 1000 pollers for 1 seconds with 1 microseconds period. 
01:16:53.554 [2024-12-09T05:11:45.171Z] ====================================== 01:16:53.554 [2024-12-09T05:11:45.171Z] busy:2608935708 (cyc) 01:16:53.554 [2024-12-09T05:11:45.171Z] total_run_count: 410000 01:16:53.554 [2024-12-09T05:11:45.171Z] tsc_hz: 2600000000 (cyc) 01:16:53.554 [2024-12-09T05:11:45.171Z] ====================================== 01:16:53.554 [2024-12-09T05:11:45.171Z] poller_cost: 6363 (cyc), 2447 (nsec) 01:16:53.554 01:16:53.554 real 0m1.244s 01:16:53.554 user 0m1.115s 01:16:53.554 sys 0m0.024s 01:16:53.554 05:11:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:53.554 05:11:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:16:53.554 ************************************ 01:16:53.554 END TEST thread_poller_perf 01:16:53.554 ************************************ 01:16:53.554 05:11:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:16:53.554 05:11:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:16:53.554 05:11:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:53.554 05:11:44 thread -- common/autotest_common.sh@10 -- # set +x 01:16:53.554 ************************************ 01:16:53.554 START TEST thread_poller_perf 01:16:53.554 ************************************ 01:16:53.554 05:11:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:16:53.554 [2024-12-09 05:11:44.904291] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:16:53.554 [2024-12-09 05:11:44.904599] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60639 ] 01:16:53.554 [2024-12-09 05:11:45.041772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:53.554 Running 1000 pollers for 1 seconds with 0 microseconds period. 
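This second run registers the same 1000 pollers with a 0 microsecond period, i.e. as busy pollers that run on every reactor iteration rather than on a timer, which is why the per-call cost reported below drops sharply. The same arithmetic, using the values from the next summary:

  echo $(( 2601774312 / 5463000 ))                              # 476 cycles per invocation
  echo $(( (2601774312 / 5463000) * 1000000000 / 2600000000 ))  # 183 nsec at 2.6 GHz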
01:16:53.554 [2024-12-09 05:11:45.071893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:54.926 [2024-12-09T05:11:46.543Z] ====================================== 01:16:54.926 [2024-12-09T05:11:46.543Z] busy:2601774312 (cyc) 01:16:54.926 [2024-12-09T05:11:46.543Z] total_run_count: 5463000 01:16:54.926 [2024-12-09T05:11:46.543Z] tsc_hz: 2600000000 (cyc) 01:16:54.926 [2024-12-09T05:11:46.543Z] ====================================== 01:16:54.926 [2024-12-09T05:11:46.543Z] poller_cost: 476 (cyc), 183 (nsec) 01:16:54.926 01:16:54.926 real 0m1.238s 01:16:54.926 user 0m1.110s 01:16:54.926 sys 0m0.022s 01:16:54.926 ************************************ 01:16:54.926 END TEST thread_poller_perf 01:16:54.926 ************************************ 01:16:54.926 05:11:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:54.926 05:11:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:16:54.926 05:11:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:16:54.926 01:16:54.926 real 0m2.720s 01:16:54.926 user 0m2.343s 01:16:54.926 sys 0m0.163s 01:16:54.926 ************************************ 01:16:54.926 END TEST thread 01:16:54.926 ************************************ 01:16:54.926 05:11:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:54.926 05:11:46 thread -- common/autotest_common.sh@10 -- # set +x 01:16:54.926 05:11:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:16:54.926 05:11:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:16:54.926 05:11:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:54.926 05:11:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:54.926 05:11:46 -- common/autotest_common.sh@10 -- # set +x 01:16:54.926 ************************************ 01:16:54.926 START TEST app_cmdline 01:16:54.926 ************************************ 01:16:54.926 05:11:46 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:16:54.926 * Looking for test storage... 
01:16:54.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:16:54.926 05:11:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:54.926 05:11:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 01:16:54.926 05:11:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:54.926 05:11:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:54.926 05:11:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:54.926 05:11:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:54.926 05:11:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@345 -- # : 1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:54.927 05:11:46 app_cmdline -- scripts/common.sh@368 -- # return 0 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:54.927 --rc genhtml_branch_coverage=1 01:16:54.927 --rc genhtml_function_coverage=1 01:16:54.927 --rc genhtml_legend=1 01:16:54.927 --rc geninfo_all_blocks=1 01:16:54.927 --rc geninfo_unexecuted_blocks=1 01:16:54.927 01:16:54.927 ' 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:54.927 --rc genhtml_branch_coverage=1 01:16:54.927 --rc genhtml_function_coverage=1 01:16:54.927 --rc genhtml_legend=1 01:16:54.927 --rc geninfo_all_blocks=1 01:16:54.927 --rc geninfo_unexecuted_blocks=1 01:16:54.927 
01:16:54.927 ' 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:54.927 --rc genhtml_branch_coverage=1 01:16:54.927 --rc genhtml_function_coverage=1 01:16:54.927 --rc genhtml_legend=1 01:16:54.927 --rc geninfo_all_blocks=1 01:16:54.927 --rc geninfo_unexecuted_blocks=1 01:16:54.927 01:16:54.927 ' 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:54.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:54.927 --rc genhtml_branch_coverage=1 01:16:54.927 --rc genhtml_function_coverage=1 01:16:54.927 --rc genhtml_legend=1 01:16:54.927 --rc geninfo_all_blocks=1 01:16:54.927 --rc geninfo_unexecuted_blocks=1 01:16:54.927 01:16:54.927 ' 01:16:54.927 05:11:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:16:54.927 05:11:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60721 01:16:54.927 05:11:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60721 01:16:54.927 05:11:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60721 ']' 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:54.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:54.927 05:11:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:16:54.927 [2024-12-09 05:11:46.403375] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
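Here spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are reachable on the target's default /var/tmp/spdk.sock socket; the env_dpdk_get_mem_stats call made later in this test is therefore expected to be rejected with -32601 (Method not found). A rough by-hand equivalent of what the test exercises (a sketch only, using the same rpc.py path that appears in this log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version                       # version object, printed below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # rpc_get_methods, spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats                 # fails: Code=-32601 Msg=Method not found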
01:16:54.927 [2024-12-09 05:11:46.403590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60721 ] 01:16:54.927 [2024-12-09 05:11:46.534323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:55.184 [2024-12-09 05:11:46.564839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:55.747 05:11:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:55.747 05:11:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:16:55.747 05:11:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:16:56.061 { 01:16:56.061 "fields": { 01:16:56.061 "commit": "66902d69a", 01:16:56.061 "major": 25, 01:16:56.061 "minor": 1, 01:16:56.061 "patch": 0, 01:16:56.061 "suffix": "-pre" 01:16:56.061 }, 01:16:56.061 "version": "SPDK v25.01-pre git sha1 66902d69a" 01:16:56.061 } 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@26 -- # sort 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:16:56.061 2024/12/09 05:11:47 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 01:16:56.061 request: 01:16:56.061 { 01:16:56.061 "method": "env_dpdk_get_mem_stats", 01:16:56.061 "params": {} 01:16:56.061 } 01:16:56.061 Got JSON-RPC error response 01:16:56.061 GoRPCClient: error on JSON-RPC call 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:56.061 05:11:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60721 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60721 ']' 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60721 01:16:56.061 05:11:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60721 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:16:56.318 killing process with pid 60721 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60721' 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 60721 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 60721 01:16:56.318 01:16:56.318 real 0m1.708s 01:16:56.318 user 0m2.088s 01:16:56.318 sys 0m0.331s 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:56.318 ************************************ 01:16:56.318 END TEST app_cmdline 01:16:56.318 ************************************ 01:16:56.318 05:11:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:16:56.589 05:11:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:16:56.589 05:11:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:56.589 05:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:56.589 05:11:47 -- common/autotest_common.sh@10 -- # set +x 01:16:56.589 ************************************ 01:16:56.589 START TEST version 01:16:56.589 ************************************ 01:16:56.589 05:11:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:16:56.589 * Looking for test storage... 
01:16:56.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1693 -- # lcov --version 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:56.589 05:11:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:56.589 05:11:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:56.589 05:11:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:56.589 05:11:48 version -- scripts/common.sh@336 -- # IFS=.-: 01:16:56.589 05:11:48 version -- scripts/common.sh@336 -- # read -ra ver1 01:16:56.589 05:11:48 version -- scripts/common.sh@337 -- # IFS=.-: 01:16:56.589 05:11:48 version -- scripts/common.sh@337 -- # read -ra ver2 01:16:56.589 05:11:48 version -- scripts/common.sh@338 -- # local 'op=<' 01:16:56.589 05:11:48 version -- scripts/common.sh@340 -- # ver1_l=2 01:16:56.589 05:11:48 version -- scripts/common.sh@341 -- # ver2_l=1 01:16:56.589 05:11:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:56.589 05:11:48 version -- scripts/common.sh@344 -- # case "$op" in 01:16:56.589 05:11:48 version -- scripts/common.sh@345 -- # : 1 01:16:56.589 05:11:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:56.589 05:11:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:16:56.589 05:11:48 version -- scripts/common.sh@365 -- # decimal 1 01:16:56.589 05:11:48 version -- scripts/common.sh@353 -- # local d=1 01:16:56.589 05:11:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:56.589 05:11:48 version -- scripts/common.sh@355 -- # echo 1 01:16:56.589 05:11:48 version -- scripts/common.sh@365 -- # ver1[v]=1 01:16:56.589 05:11:48 version -- scripts/common.sh@366 -- # decimal 2 01:16:56.589 05:11:48 version -- scripts/common.sh@353 -- # local d=2 01:16:56.589 05:11:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:56.589 05:11:48 version -- scripts/common.sh@355 -- # echo 2 01:16:56.589 05:11:48 version -- scripts/common.sh@366 -- # ver2[v]=2 01:16:56.589 05:11:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:56.589 05:11:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:56.589 05:11:48 version -- scripts/common.sh@368 -- # return 0 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.589 --rc genhtml_branch_coverage=1 01:16:56.589 --rc genhtml_function_coverage=1 01:16:56.589 --rc genhtml_legend=1 01:16:56.589 --rc geninfo_all_blocks=1 01:16:56.589 --rc geninfo_unexecuted_blocks=1 01:16:56.589 01:16:56.589 ' 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.589 --rc genhtml_branch_coverage=1 01:16:56.589 --rc genhtml_function_coverage=1 01:16:56.589 --rc genhtml_legend=1 01:16:56.589 --rc geninfo_all_blocks=1 01:16:56.589 --rc geninfo_unexecuted_blocks=1 01:16:56.589 01:16:56.589 ' 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:56.589 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:16:56.589 --rc genhtml_branch_coverage=1 01:16:56.589 --rc genhtml_function_coverage=1 01:16:56.589 --rc genhtml_legend=1 01:16:56.589 --rc geninfo_all_blocks=1 01:16:56.589 --rc geninfo_unexecuted_blocks=1 01:16:56.589 01:16:56.589 ' 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:56.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.589 --rc genhtml_branch_coverage=1 01:16:56.589 --rc genhtml_function_coverage=1 01:16:56.589 --rc genhtml_legend=1 01:16:56.589 --rc geninfo_all_blocks=1 01:16:56.589 --rc geninfo_unexecuted_blocks=1 01:16:56.589 01:16:56.589 ' 01:16:56.589 05:11:48 version -- app/version.sh@17 -- # get_header_version major 01:16:56.589 05:11:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # tr -d '"' 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # cut -f2 01:16:56.589 05:11:48 version -- app/version.sh@17 -- # major=25 01:16:56.589 05:11:48 version -- app/version.sh@18 -- # get_header_version minor 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # cut -f2 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # tr -d '"' 01:16:56.589 05:11:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:16:56.589 05:11:48 version -- app/version.sh@18 -- # minor=1 01:16:56.589 05:11:48 version -- app/version.sh@19 -- # get_header_version patch 01:16:56.589 05:11:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # cut -f2 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # tr -d '"' 01:16:56.589 05:11:48 version -- app/version.sh@19 -- # patch=0 01:16:56.589 05:11:48 version -- app/version.sh@20 -- # get_header_version suffix 01:16:56.589 05:11:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # cut -f2 01:16:56.589 05:11:48 version -- app/version.sh@14 -- # tr -d '"' 01:16:56.589 05:11:48 version -- app/version.sh@20 -- # suffix=-pre 01:16:56.589 05:11:48 version -- app/version.sh@22 -- # version=25.1 01:16:56.589 05:11:48 version -- app/version.sh@25 -- # (( patch != 0 )) 01:16:56.589 05:11:48 version -- app/version.sh@28 -- # version=25.1rc0 01:16:56.589 05:11:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:16:56.589 05:11:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:16:56.589 05:11:48 version -- app/version.sh@30 -- # py_version=25.1rc0 01:16:56.589 05:11:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:16:56.589 01:16:56.589 real 0m0.189s 01:16:56.589 user 0m0.121s 01:16:56.589 sys 0m0.099s 01:16:56.589 05:11:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:56.589 05:11:48 version -- common/autotest_common.sh@10 -- # set +x 01:16:56.589 ************************************ 01:16:56.589 END TEST version 01:16:56.589 ************************************ 01:16:56.589 05:11:48 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:16:56.589 05:11:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 01:16:56.589 05:11:48 -- spdk/autotest.sh@194 -- # uname -s 01:16:56.589 05:11:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 01:16:56.589 05:11:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:16:56.589 05:11:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:16:56.589 05:11:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 01:16:56.589 05:11:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:16:56.589 05:11:48 -- spdk/autotest.sh@260 -- # timing_exit lib 01:16:56.589 05:11:48 -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:56.589 05:11:48 -- common/autotest_common.sh@10 -- # set +x 01:16:56.847 05:11:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:16:56.847 05:11:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 01:16:56.847 05:11:48 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 01:16:56.847 05:11:48 -- spdk/autotest.sh@277 -- # export NET_TYPE 01:16:56.847 05:11:48 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 01:16:56.847 05:11:48 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 01:16:56.847 05:11:48 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:16:56.847 05:11:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:16:56.847 05:11:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:56.847 05:11:48 -- common/autotest_common.sh@10 -- # set +x 01:16:56.847 ************************************ 01:16:56.847 START TEST nvmf_tcp 01:16:56.847 ************************************ 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:16:56.847 * Looking for test storage... 01:16:56.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:56.847 05:11:48 nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.847 --rc genhtml_branch_coverage=1 01:16:56.847 --rc genhtml_function_coverage=1 01:16:56.847 --rc genhtml_legend=1 01:16:56.847 --rc geninfo_all_blocks=1 01:16:56.847 --rc geninfo_unexecuted_blocks=1 01:16:56.847 01:16:56.847 ' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.847 --rc genhtml_branch_coverage=1 01:16:56.847 --rc genhtml_function_coverage=1 01:16:56.847 --rc genhtml_legend=1 01:16:56.847 --rc geninfo_all_blocks=1 01:16:56.847 --rc geninfo_unexecuted_blocks=1 01:16:56.847 01:16:56.847 ' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.847 --rc genhtml_branch_coverage=1 01:16:56.847 --rc genhtml_function_coverage=1 01:16:56.847 --rc genhtml_legend=1 01:16:56.847 --rc geninfo_all_blocks=1 01:16:56.847 --rc geninfo_unexecuted_blocks=1 01:16:56.847 01:16:56.847 ' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:56.847 --rc genhtml_branch_coverage=1 01:16:56.847 --rc genhtml_function_coverage=1 01:16:56.847 --rc genhtml_legend=1 01:16:56.847 --rc geninfo_all_blocks=1 01:16:56.847 --rc geninfo_unexecuted_blocks=1 01:16:56.847 01:16:56.847 ' 01:16:56.847 05:11:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 01:16:56.847 05:11:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:16:56.847 05:11:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:56.847 05:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:16:56.847 ************************************ 01:16:56.847 START TEST nvmf_target_core 01:16:56.847 ************************************ 01:16:56.847 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:16:56.847 * Looking for test storage... 01:16:56.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:16:56.847 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:56.847 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:56.848 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:57.106 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:57.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.107 --rc genhtml_branch_coverage=1 01:16:57.107 --rc genhtml_function_coverage=1 01:16:57.107 --rc genhtml_legend=1 01:16:57.107 --rc geninfo_all_blocks=1 01:16:57.107 --rc geninfo_unexecuted_blocks=1 01:16:57.107 01:16:57.107 ' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:57.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.107 --rc genhtml_branch_coverage=1 01:16:57.107 --rc genhtml_function_coverage=1 01:16:57.107 --rc genhtml_legend=1 01:16:57.107 --rc geninfo_all_blocks=1 01:16:57.107 --rc geninfo_unexecuted_blocks=1 01:16:57.107 01:16:57.107 ' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:57.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.107 --rc genhtml_branch_coverage=1 01:16:57.107 --rc genhtml_function_coverage=1 01:16:57.107 --rc genhtml_legend=1 01:16:57.107 --rc geninfo_all_blocks=1 01:16:57.107 --rc geninfo_unexecuted_blocks=1 01:16:57.107 01:16:57.107 ' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:57.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.107 --rc genhtml_branch_coverage=1 01:16:57.107 --rc genhtml_function_coverage=1 01:16:57.107 --rc genhtml_legend=1 01:16:57.107 --rc geninfo_all_blocks=1 01:16:57.107 --rc geninfo_unexecuted_blocks=1 01:16:57.107 01:16:57.107 ' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
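nvmf/common.sh is assembling the NVMF_APP argument list here, and one of its numeric flag checks runs against a variable that is empty in this configuration, which is why the trace a few lines below shows '[' '' -eq 1 ']' followed by "line 33: [: : integer expression expected". The message is benign: test returns a non-zero status, the condition is treated as false and the run continues. A minimal reproduction (the flag name is hypothetical, not the variable common.sh actually tests):

  flag=""
  if [ "$flag" -eq 1 ]; then    # prints "[: : integer expression expected", [ returns status 2
      echo "flag enabled"
  fi                            # condition is false, the script keeps going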
01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:16:57.107 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:16:57.107 ************************************ 01:16:57.107 START TEST nvmf_abort 01:16:57.107 ************************************ 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 01:16:57.107 * Looking for test storage... 
01:16:57.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 01:16:57.107 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.108 --rc genhtml_branch_coverage=1 01:16:57.108 --rc genhtml_function_coverage=1 01:16:57.108 --rc genhtml_legend=1 01:16:57.108 --rc geninfo_all_blocks=1 01:16:57.108 --rc geninfo_unexecuted_blocks=1 01:16:57.108 01:16:57.108 ' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.108 --rc genhtml_branch_coverage=1 01:16:57.108 --rc genhtml_function_coverage=1 01:16:57.108 --rc genhtml_legend=1 01:16:57.108 --rc geninfo_all_blocks=1 01:16:57.108 --rc geninfo_unexecuted_blocks=1 01:16:57.108 01:16:57.108 ' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.108 --rc genhtml_branch_coverage=1 01:16:57.108 --rc genhtml_function_coverage=1 01:16:57.108 --rc genhtml_legend=1 01:16:57.108 --rc geninfo_all_blocks=1 01:16:57.108 --rc geninfo_unexecuted_blocks=1 01:16:57.108 01:16:57.108 ' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:57.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:57.108 --rc genhtml_branch_coverage=1 01:16:57.108 --rc genhtml_function_coverage=1 01:16:57.108 --rc genhtml_legend=1 01:16:57.108 --rc geninfo_all_blocks=1 01:16:57.108 --rc geninfo_unexecuted_blocks=1 01:16:57.108 01:16:57.108 ' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
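From this point nvmftestinit builds a purely virtual (NET_TYPE=virt) test network, traced in full further below after the leftover-device cleanup (the "Cannot find device" messages): a network namespace nvmf_tgt_ns_spdk holds the target-side ends of two veth pairs (nvmf_tgt_if at 10.0.0.3/24 and nvmf_tgt_if2 at 10.0.0.4/24), the initiator-side interfaces nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) stay in the root namespace, and the *_br peer ends are all enslaved to a bridge nvmf_br, with iptables opened for TCP port 4420. A condensed sketch showing just the first pair (the second pair and the link-up steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                # bridge joins the *_br peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br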
01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:16:57.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 01:16:57.108 
05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:16:57.108 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:16:57.109 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:16:57.109 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:16:57.109 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 01:16:57.367 Cannot find device "nvmf_init_br" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:16:57.367 Cannot find device "nvmf_init_br2" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:16:57.367 Cannot find device "nvmf_tgt_br" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:16:57.367 Cannot find device "nvmf_tgt_br2" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:16:57.367 Cannot find device "nvmf_init_br" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:16:57.367 Cannot find device "nvmf_init_br2" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:16:57.367 Cannot find device "nvmf_tgt_br" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:16:57.367 Cannot find device "nvmf_tgt_br2" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:16:57.367 Cannot find device "nvmf_br" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:16:57.367 Cannot find device "nvmf_init_if" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:16:57.367 Cannot find device "nvmf_init_if2" 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:16:57.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:16:57.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:16:57.367 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:16:57.368 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:16:57.627 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:16:57.627 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:16:57.627 05:11:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:16:57.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:16:57.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 01:16:57.627 01:16:57.627 --- 10.0.0.3 ping statistics --- 01:16:57.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:57.627 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:16:57.627 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:16:57.627 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 01:16:57.627 01:16:57.627 --- 10.0.0.4 ping statistics --- 01:16:57.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:57.627 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:16:57.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:16:57.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:16:57.627 01:16:57.627 --- 10.0.0.1 ping statistics --- 01:16:57.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:57.627 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:16:57.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:16:57.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 01:16:57.627 01:16:57.627 --- 10.0.0.2 ping statistics --- 01:16:57.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:16:57.627 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=61145 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 61145 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 61145 ']' 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:16:57.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 01:16:57.627 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:57.627 [2024-12-09 05:11:49.124760] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
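Note: the target application is launched inside the nvmf_tgt_ns_spdk namespace with core mask 0xE (cores 1-3) and the full 0xFFFF tracepoint mask, after which the harness's waitforlisten helper blocks on the RPC socket before any configuration is issued. A roughly equivalent manual launch, as a sketch (binary path and flags are the ones visible in this log; the readiness poll via rpc_get_methods is an illustrative way to wait, not waitforlisten's actual implementation):

    # Sketch: start the target in the test namespace and poll its RPC socket.
    sudo ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
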
01:16:57.627 [2024-12-09 05:11:49.124812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:16:57.886 [2024-12-09 05:11:49.266450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:16:57.886 [2024-12-09 05:11:49.304441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:16:57.886 [2024-12-09 05:11:49.304478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:16:57.886 [2024-12-09 05:11:49.304485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:16:57.886 [2024-12-09 05:11:49.304490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:16:57.886 [2024-12-09 05:11:49.304494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:16:57.886 [2024-12-09 05:11:49.305161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:16:57.886 [2024-12-09 05:11:49.305236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:16:57.886 [2024-12-09 05:11:49.305381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.459 [2024-12-09 05:11:49.984547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.459 05:11:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 Malloc0 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 
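Note: the abort test stacks a delay bdev on a 64 MiB malloc bdev; with 1,000,000 µs (about one second) average and p99 latencies for both reads and writes, submitted commands sit in the delay queue long enough for the initiator to abort them. The lines that follow expose Delay0 through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420. The same configuration can be driven by hand with rpc.py, using the commands as they appear in this log:

    # Sketch: the bdev and subsystem setup performed by target/abort.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
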
Delay0 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 [2024-12-09 05:11:50.043809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:16:58.460 05:11:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 01:16:58.716 [2024-12-09 05:11:50.217189] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:17:01.242 Initializing NVMe Controllers 01:17:01.242 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:17:01.242 controller IO queue size 128 less than required 01:17:01.242 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 01:17:01.242 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 01:17:01.242 Initialization complete. Launching workers. 
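Note: build/examples/abort connects from a single core (-c 0x1) with queue depth 128 (-q 128), runs for one second (-t 1), and submits an abort for each outstanding command; the "controller IO queue size 128 less than required" warning only means some requests will queue in the host driver rather than on the controller. In the counters that follow, aborted commands show up in the failed I/O count, while the abort commands themselves are broken down into submitted, failed-to-submit, successful and unsuccessful, so a large failed-I/O count next to a similarly large successful-abort count is the expected outcome for this test. The invocation, reformatted for readability (same arguments as above):

    # Sketch: re-running the abort example by hand against the same listener.
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
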
01:17:01.242 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41103 01:17:01.242 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41164, failed to submit 62 01:17:01.242 success 41107, unsuccessful 57, failed 0 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:01.242 rmmod nvme_tcp 01:17:01.242 rmmod nvme_fabrics 01:17:01.242 rmmod nvme_keyring 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 61145 ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 61145 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 61145 ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 61145 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61145 01:17:01.242 killing process with pid 61145 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61145' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 61145 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 61145 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:01.242 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 01:17:01.243 ************************************ 01:17:01.243 END TEST nvmf_abort 01:17:01.243 ************************************ 01:17:01.243 01:17:01.243 real 0m4.250s 01:17:01.243 user 0m11.778s 01:17:01.243 sys 0m0.848s 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:01.243 ************************************ 01:17:01.243 START TEST nvmf_ns_hotplug_stress 01:17:01.243 ************************************ 01:17:01.243 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 01:17:01.502 * Looking for test storage... 01:17:01.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 01:17:01.502 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:01.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.503 --rc genhtml_branch_coverage=1 01:17:01.503 --rc genhtml_function_coverage=1 01:17:01.503 --rc genhtml_legend=1 01:17:01.503 --rc geninfo_all_blocks=1 01:17:01.503 --rc geninfo_unexecuted_blocks=1 01:17:01.503 01:17:01.503 ' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:01.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.503 --rc genhtml_branch_coverage=1 01:17:01.503 --rc genhtml_function_coverage=1 01:17:01.503 --rc genhtml_legend=1 01:17:01.503 --rc geninfo_all_blocks=1 01:17:01.503 --rc geninfo_unexecuted_blocks=1 01:17:01.503 01:17:01.503 ' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:01.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.503 --rc genhtml_branch_coverage=1 01:17:01.503 --rc genhtml_function_coverage=1 01:17:01.503 --rc genhtml_legend=1 01:17:01.503 --rc geninfo_all_blocks=1 01:17:01.503 --rc geninfo_unexecuted_blocks=1 01:17:01.503 01:17:01.503 ' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:01.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.503 --rc genhtml_branch_coverage=1 01:17:01.503 --rc genhtml_function_coverage=1 01:17:01.503 --rc genhtml_legend=1 01:17:01.503 --rc geninfo_all_blocks=1 01:17:01.503 --rc geninfo_unexecuted_blocks=1 01:17:01.503 01:17:01.503 ' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:01.503 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:01.503 05:11:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:01.503 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:01.504 Cannot find device "nvmf_init_br" 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 01:17:01.504 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:01.504 Cannot find device "nvmf_init_br2" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:01.504 Cannot find device "nvmf_tgt_br" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:01.504 Cannot find device "nvmf_tgt_br2" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:01.504 Cannot find device "nvmf_init_br" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:01.504 Cannot find device "nvmf_init_br2" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:01.504 Cannot find device "nvmf_tgt_br" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:01.504 Cannot find device "nvmf_tgt_br2" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:01.504 Cannot find device "nvmf_br" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:01.504 Cannot find device "nvmf_init_if" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:01.504 Cannot find device "nvmf_init_if2" 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:01.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:01.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:01.504 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:01.762 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:01.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:01.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 01:17:01.763 01:17:01.763 --- 10.0.0.3 ping statistics --- 01:17:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:01.763 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:01.763 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:17:01.763 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 01:17:01.763 01:17:01.763 --- 10.0.0.4 ping statistics --- 01:17:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:01.763 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:01.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:01.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 01:17:01.763 01:17:01.763 --- 10.0.0.1 ping statistics --- 01:17:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:01.763 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:01.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:01.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 01:17:01.763 01:17:01.763 --- 10.0.0.2 ping statistics --- 01:17:01.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:01.763 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=61456 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 61456 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 61456 ']' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:17:01.763 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:01.763 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:17:01.763 [2024-12-09 05:11:53.357061] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:17:01.763 [2024-12-09 05:11:53.357123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:02.021 [2024-12-09 05:11:53.496438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:02.021 [2024-12-09 05:11:53.530733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:02.021 [2024-12-09 05:11:53.530769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:02.021 [2024-12-09 05:11:53.530776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:02.021 [2024-12-09 05:11:53.530781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:02.021 [2024-12-09 05:11:53.530785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
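Note: the second target instance (nvmfpid=61456) is started with the same 0xE core mask, so the reactor notices that follow land on cores 1-3. The hotplug-stress flow then creates the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (capped at 10 namespaces via -m 10), a Malloc0/Delay0 pair, and a 1000 MB null bdev (NULL1, 512-byte blocks) that the stress loop exercises. The resize/re-add loop itself lies past the end of this excerpt, so the sketch below only illustrates that kind of sequence rather than the script's actual code; bdev_null_resize, nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns are real rpc.py methods, but the nsid and iteration count are made up:

    # Illustrative namespace hotplug loop, not the contents of ns_hotplug_stress.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    size=1000
    for i in $(seq 1 10); do
        size=$((size + 100))
        $rpc bdev_null_resize NULL1 $size                             # grow the null bdev
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2    # hot-remove nsid 2 (illustrative)
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # hot-add it back
    done
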
01:17:02.021 [2024-12-09 05:11:53.531436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:02.021 [2024-12-09 05:11:53.531554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:02.021 [2024-12-09 05:11:53.531554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:17:02.954 [2024-12-09 05:11:54.412327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:02.954 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:17:03.213 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:17:03.213 [2024-12-09 05:11:54.729082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:03.213 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:17:03.471 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 01:17:03.727 Malloc0 01:17:03.727 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:17:03.727 Delay0 01:17:03.727 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:03.985 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 01:17:04.242 NULL1 01:17:04.242 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:17:04.499 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 01:17:04.499 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=61587 01:17:04.499 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:04.499 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:04.499 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:04.757 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 01:17:04.757 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 01:17:05.015 true 01:17:05.015 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:05.015 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:05.272 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:05.529 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 01:17:05.529 05:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 01:17:05.529 true 01:17:05.529 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:05.529 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:05.820 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:06.078 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 01:17:06.078 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 01:17:06.335 true 01:17:06.335 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:06.335 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:06.335 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:06.593 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 
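This is the single-namespace phase of the stress test: spdk_nvme_perf (PID 61587, started just above with -w randread -o 512 -t 30) keeps I/O in flight while the script repeatedly detaches namespace 1, re-attaches Delay0, and bumps the NULL1 size (null_size 1000, 1001, 1002, ...); the lone 'true' lines are the JSON result of bdev_null_resize. A condensed sketch of that loop, assuming PERF_PID holds the perf process ID as in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  null_size=1000
  # Keep hot-plugging namespace 1 for as long as the perf workload is alive.
  while kill -0 "$PERF_PID" 2> /dev/null; do
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      (( ++null_size ))
      "$rpc" bdev_null_resize NULL1 "$null_size"
  done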
01:17:06.593 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 01:17:06.851 true 01:17:06.851 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:06.851 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:07.109 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:07.367 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 01:17:07.367 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 01:17:07.367 true 01:17:07.367 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:07.367 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:07.625 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:07.884 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 01:17:07.884 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 01:17:08.142 true 01:17:08.142 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:08.142 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:08.399 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:08.399 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 01:17:08.399 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 01:17:08.659 true 01:17:08.659 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:08.659 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:08.918 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:09.176 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 01:17:09.176 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 01:17:09.435 true 01:17:09.435 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:09.435 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:09.693 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:09.951 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 01:17:09.951 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 01:17:09.951 true 01:17:09.951 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:09.951 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:10.208 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:10.466 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 01:17:10.466 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 01:17:10.723 true 01:17:10.723 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:10.723 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:10.980 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:11.238 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 01:17:11.238 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 01:17:11.238 true 01:17:11.495 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:11.495 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:11.495 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:11.754 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 01:17:11.754 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 01:17:12.012 true 01:17:12.012 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:12.012 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:12.270 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:12.528 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 01:17:12.528 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 01:17:12.528 true 01:17:12.528 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:12.529 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:12.786 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:13.044 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 01:17:13.044 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 01:17:13.302 true 01:17:13.302 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:13.302 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:13.559 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:13.559 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 01:17:13.559 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 01:17:13.817 true 01:17:13.817 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:13.817 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:14.074 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:14.333 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 01:17:14.333 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 01:17:14.591 true 01:17:14.591 05:12:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:14.591 05:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:14.591 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:14.848 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 01:17:14.848 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 01:17:15.106 true 01:17:15.106 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:15.106 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:15.364 05:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:15.621 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 01:17:15.621 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 01:17:15.621 true 01:17:15.621 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:15.621 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:15.878 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:16.225 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 01:17:16.225 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 01:17:16.550 true 01:17:16.550 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:16.550 05:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:16.551 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:16.834 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 01:17:16.834 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 01:17:16.834 true 01:17:17.092 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 61587 01:17:17.092 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:17.092 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:17.350 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 01:17:17.350 05:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 01:17:17.608 true 01:17:17.608 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:17.608 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:17.865 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:18.122 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 01:17:18.122 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 01:17:18.122 true 01:17:18.123 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:18.123 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:18.379 05:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:18.637 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 01:17:18.637 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 01:17:18.896 true 01:17:18.896 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:18.896 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:19.153 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:19.410 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 01:17:19.410 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 01:17:19.410 true 01:17:19.410 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:19.410 05:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:19.671 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:19.928 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 01:17:19.928 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 01:17:20.185 true 01:17:20.185 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:20.185 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:20.442 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:20.442 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 01:17:20.442 05:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 01:17:20.718 true 01:17:20.718 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:20.718 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:20.976 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:21.235 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 01:17:21.235 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 01:17:21.235 true 01:17:21.235 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:21.235 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:21.494 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:21.752 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 01:17:21.752 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 01:17:22.012 true 01:17:22.012 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:22.012 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:22.273 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:22.273 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 01:17:22.273 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 01:17:22.533 true 01:17:22.533 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:22.533 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:22.792 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:23.050 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 01:17:23.050 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 01:17:23.050 true 01:17:23.310 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:23.310 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:23.310 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:23.568 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 01:17:23.568 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 01:17:23.827 true 01:17:23.827 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:23.827 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:24.087 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:24.348 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 01:17:24.348 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 01:17:24.348 true 01:17:24.609 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:24.609 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:24.609 05:12:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:24.866 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 01:17:24.866 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 01:17:25.122 true 01:17:25.122 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:25.122 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:25.377 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:25.636 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 01:17:25.636 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 01:17:25.636 true 01:17:25.636 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:25.636 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:25.894 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:26.202 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 01:17:26.202 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 01:17:26.482 true 01:17:26.482 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:26.482 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:26.482 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:26.742 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 01:17:26.743 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 01:17:27.002 true 01:17:27.002 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:27.002 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:27.262 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:27.262 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 01:17:27.262 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 01:17:27.543 true 01:17:27.543 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:27.543 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:27.804 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:28.065 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 01:17:28.065 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 01:17:28.326 true 01:17:28.326 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:28.326 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:28.326 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:28.586 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 01:17:28.586 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 01:17:28.848 true 01:17:28.848 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:28.848 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:29.109 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:29.377 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 01:17:29.377 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 01:17:29.377 true 01:17:29.377 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:29.377 05:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:29.638 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 01:17:29.900 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 01:17:29.900 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 01:17:29.900 true 01:17:29.900 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:29.900 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:30.161 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:30.422 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 01:17:30.422 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 01:17:30.422 true 01:17:30.422 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:30.422 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:30.683 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:30.945 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 01:17:30.945 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 01:17:31.206 true 01:17:31.206 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:31.206 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:31.468 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:31.468 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 01:17:31.468 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 01:17:31.729 true 01:17:31.729 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:31.729 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:31.987 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:32.243 05:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 01:17:32.243 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 01:17:32.499 true 01:17:32.499 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:32.500 05:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:32.500 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:32.757 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 01:17:32.757 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 01:17:33.014 true 01:17:33.014 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:33.014 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:33.272 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:33.272 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 01:17:33.272 05:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 01:17:33.529 true 01:17:33.529 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:33.529 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:33.785 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:34.043 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 01:17:34.043 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 01:17:34.300 true 01:17:34.301 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:34.301 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:34.558 05:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:34.559 Initializing NVMe Controllers 01:17:34.559 Attached to NVMe over Fabrics controller at 
10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:17:34.559 Controller IO queue size 128, less than required. 01:17:34.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:17:34.559 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:17:34.559 Initialization complete. Launching workers. 01:17:34.559 ======================================================== 01:17:34.559 Latency(us) 01:17:34.559 Device Information : IOPS MiB/s Average min max 01:17:34.559 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28410.23 13.87 4505.30 2942.50 14563.88 01:17:34.559 ======================================================== 01:17:34.559 Total : 28410.23 13.87 4505.30 2942.50 14563.88 01:17:34.559 01:17:34.559 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 01:17:34.559 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 01:17:34.816 true 01:17:34.816 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61587 01:17:34.816 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (61587) - No such process 01:17:34.816 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 61587 01:17:34.816 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:35.072 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:35.329 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:17:35.329 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:17:35.329 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:17:35.329 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:35.329 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:17:35.329 null0 01:17:35.586 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:35.586 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:35.586 05:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:17:35.586 null1 01:17:35.586 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:35.586 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:35.586 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:17:35.844 null2 01:17:35.844 05:12:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:35.844 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:35.844 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:17:36.102 null3 01:17:36.102 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:36.102 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:36.102 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:17:36.360 null4 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:17:36.360 null5 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:36.360 05:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:17:36.618 null6 01:17:36.618 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:36.618 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:36.618 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:17:36.877 null7 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
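The bdev_null_create calls above provision one backing device per worker for the parallel phase: eight null bdevs, null0 through null7, each created with size 100 and a 4096-byte block size. A condensed sketch of that setup, assuming nthreads=8 as set earlier in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nthreads=8
  # One null bdev per add/remove worker.
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096
  done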
01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:17:36.877 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63033 63034 63036 63038 63040 63043 63044 63046 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:36.878 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:37.137 05:12:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.137 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
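The ns_hotplug_stress.sh xtrace above (and continuing below) is eight background workers hammering the same subsystem. Roughly, the shape implied by the traced line numbers and arguments is the following sketch; the add_remove name, the 10-iteration loop, and the rpc.py arguments are read straight from the trace, while variable names such as rpc_py and nthreads are assumptions rather than verbatim source:

    # Minimal sketch of the traced loop (not the verbatim test script).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do
            # attach the null bdev as namespace $nsid, then detach it again
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8 pids=()
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &   # one worker per namespace/bdev pair
        pids+=($!)
    done
    wait "${pids[@]}"

Because all eight workers target nqn.2016-06.io.spdk:cnode1 concurrently, their add/remove calls interleave freely in the timestamps that follow.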
01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:37.396 05:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:37.654 05:12:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.654 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:37.912 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.170 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:38.428 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:38.428 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:38.428 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:38.428 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.428 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.428 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:38.686 05:12:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.686 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:38.687 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:38.945 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:39.204 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:39.463 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.463 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:39.721 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:39.722 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:39.722 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.722 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.722 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:39.722 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
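Outside the stress loop, a single hotplug cycle can be reproduced by hand with the same RPCs that appear in this trace. This is a minimal sketch using the bdev and NQN names from this run; the nvmf_get_subsystems call is only there to confirm the namespace landed, and the jq filter assumes the usual layout in which each subsystem entry carries a "namespaces" array:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc_py" nvmf_subsystem_add_ns -n 1 "$nqn" null0          # hot-add nsid 1
    "$rpc_py" nvmf_get_subsystems \
        | jq --arg nqn "$nqn" '.[] | select(.nqn == $nqn) | .namespaces'
    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1                # hot-remove it again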
01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:39.980 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:40.238 05:12:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.238 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:40.497 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:17:40.497 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:17:40.755 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:17:41.013 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.272 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:41.530 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:41.530 rmmod nvme_tcp 01:17:41.530 rmmod nvme_fabrics 01:17:41.530 rmmod nvme_keyring 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 61456 ']' 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 61456 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 61456 ']' 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 61456 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61456 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:17:41.530 killing process with pid 61456 01:17:41.530 05:12:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61456' 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 61456 01:17:41.530 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 61456 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:41.789 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 01:17:42.050 01:17:42.050 real 0m40.604s 01:17:42.050 user 3m19.914s 01:17:42.050 sys 0m10.861s 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:17:42.050 ************************************ 01:17:42.050 END TEST nvmf_ns_hotplug_stress 01:17:42.050 ************************************ 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:42.050 ************************************ 01:17:42.050 START TEST nvmf_delete_subsystem 01:17:42.050 ************************************ 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 01:17:42.050 * Looking for test storage... 01:17:42.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 
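The xtrace just above and below walks through the scripts/common.sh version helpers (lt calling cmp_versions) to decide whether the installed lcov is older than 2 and therefore still needs the legacy --rc coverage switches. A minimal stand-in for that check, assuming GNU sort's -V ordering instead of the component-by-component compare the real helper performs, and with the option string trimmed down from the full LCOV_OPTS the log exports:

  lcov_ver=$(lcov --version | awk '{print $NF}')
  # "older than 2" == sorting the two versions puts lcov_ver first and they differ
  if [ "$(printf '%s\n%s\n' "$lcov_ver" 2 | sort -V | head -n1)" = "$lcov_ver" ] && [ "$lcov_ver" != 2 ]; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi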
01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:42.050 --rc genhtml_branch_coverage=1 01:17:42.050 --rc genhtml_function_coverage=1 01:17:42.050 --rc genhtml_legend=1 01:17:42.050 --rc geninfo_all_blocks=1 01:17:42.050 --rc geninfo_unexecuted_blocks=1 01:17:42.050 01:17:42.050 ' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:42.050 --rc genhtml_branch_coverage=1 01:17:42.050 --rc genhtml_function_coverage=1 01:17:42.050 --rc genhtml_legend=1 01:17:42.050 --rc geninfo_all_blocks=1 01:17:42.050 --rc geninfo_unexecuted_blocks=1 01:17:42.050 01:17:42.050 ' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:42.050 --rc genhtml_branch_coverage=1 01:17:42.050 --rc genhtml_function_coverage=1 01:17:42.050 --rc genhtml_legend=1 01:17:42.050 --rc geninfo_all_blocks=1 01:17:42.050 --rc geninfo_unexecuted_blocks=1 01:17:42.050 01:17:42.050 ' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:42.050 --rc genhtml_branch_coverage=1 
01:17:42.050 --rc genhtml_function_coverage=1 01:17:42.050 --rc genhtml_legend=1 01:17:42.050 --rc geninfo_all_blocks=1 01:17:42.050 --rc geninfo_unexecuted_blocks=1 01:17:42.050 01:17:42.050 ' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:42.050 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:42.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:42.051 Cannot find device "nvmf_init_br" 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:42.051 Cannot find device "nvmf_init_br2" 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 01:17:42.051 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:42.309 Cannot find device "nvmf_tgt_br" 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:42.309 Cannot find device "nvmf_tgt_br2" 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:42.309 Cannot find device "nvmf_init_br" 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:42.309 Cannot find device "nvmf_init_br2" 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 01:17:42.309 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:42.309 Cannot find device "nvmf_tgt_br" 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:42.310 Cannot find device "nvmf_tgt_br2" 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:42.310 Cannot find device "nvmf_br" 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:42.310 Cannot find device "nvmf_init_if" 01:17:42.310 05:12:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:42.310 Cannot find device "nvmf_init_if2" 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:42.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:42.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:42.310 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:42.310 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 01:17:42.310 01:17:42.310 --- 10.0.0.3 ping statistics --- 01:17:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:42.310 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:42.310 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:17:42.310 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 01:17:42.310 01:17:42.310 --- 10.0.0.4 ping statistics --- 01:17:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:42.310 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:42.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:17:42.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:17:42.310 01:17:42.310 --- 10.0.0.1 ping statistics --- 01:17:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:42.310 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:42.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:42.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 01:17:42.310 01:17:42.310 --- 10.0.0.2 ping statistics --- 01:17:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:42.310 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64414 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64414 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64414 ']' 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:42.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
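At this point nvmf_veth_init has finished building the virtual-ethernet topology the rest of the run talks over: veth pairs whose initiator ends (10.0.0.1, 10.0.0.2) stay in the default namespace, target ends (10.0.0.3, 10.0.0.4) sit inside the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, with iptables ACCEPT rules punched for TCP port 4420; NVMF_APP is then prefixed with "ip netns exec nvmf_tgt_ns_spdk" so nvmf_tgt starts inside that namespace. A condensed replay of the commands traced above, assuming the same interface names and showing only one pair per side (the harness creates two initiator and two target pairs):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # same sanity pings as above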
01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:42.310 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:42.566 [2024-12-09 05:12:33.956644] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:17:42.567 [2024-12-09 05:12:33.956697] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:42.567 [2024-12-09 05:12:34.090001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:17:42.567 [2024-12-09 05:12:34.123878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:42.567 [2024-12-09 05:12:34.124017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:42.567 [2024-12-09 05:12:34.124059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:42.567 [2024-12-09 05:12:34.124097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:42.567 [2024-12-09 05:12:34.124130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:42.567 [2024-12-09 05:12:34.124821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:42.567 [2024-12-09 05:12:34.125046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 [2024-12-09 05:12:34.891820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 [2024-12-09 05:12:34.907914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 NULL1 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 Delay0 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64465 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:17:43.505 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:17:43.505 [2024-12-09 05:12:35.092536] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
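With the target listening, delete_subsystem.sh configures it over RPC and then pulls the subsystem out from under active I/O: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.3:4420, a 1000 MiB / 512-byte-block null bdev wrapped in a delay bdev (Delay0) whose four latency knobs are all 1,000,000 us so every I/O takes roughly a second, the namespace attached, a 5-second spdk_nvme_perf run kicked off, and, two seconds in (just below), nvmf_delete_subsystem issued while that I/O is still queued. The rpc_cmd calls above map onto scripts/rpc.py; a condensed equivalent, assuming the default RPC socket, would be:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # delete with I/O still in flight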
01:17:45.402 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:17:45.402 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:45.402 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 [2024-12-09 05:12:37.123981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da5c30 is same with the state(6) to be set 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read 
completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 [2024-12-09 05:12:37.124333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da67e0 is same with the state(6) to be set 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 
01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 starting I/O failed: -6 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 [2024-12-09 05:12:37.127051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff138000c40 is same with the state(6) to be set 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Read completed with error (sct=0, sc=8) 01:17:45.662 Write completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Write completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Write completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Write completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 
Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 Write completed with error (sct=0, sc=8) 01:17:45.663 Write completed with error (sct=0, sc=8) 01:17:45.663 Read completed with error (sct=0, sc=8) 01:17:45.663 [2024-12-09 05:12:37.127657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff13800d4d0 is same with the state(6) to be set 01:17:46.598 [2024-12-09 05:12:38.105705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9aaa0 is same with the state(6) to be set 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 [2024-12-09 05:12:38.125181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da5a50 is same with the state(6) to be set 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 [2024-12-09 05:12:38.125294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da8ea0 is same with the state(6) to be set 
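The wall of "Read/Write completed with error (sct=0, sc=8)" lines above and below is the intended outcome: deleting the subsystem tears its queues down, and every command still in flight completes with status code type 0 (generic) and status code 0x08, Command Aborted due to SQ Deletion. A throwaway decoder for the few codes this test can reasonably produce (a hypothetical helper, not part of the harness):

  decode_nvme_status() {
      # sct = status code type, sc = status code, as printed by spdk_nvme_perf
      local sct=$1 sc=$2
      case "${sct}:${sc}" in
          0:0) echo "generic: successful completion" ;;
          0:4) echo "generic: data transfer error" ;;
          0:8) echo "generic: command aborted due to SQ deletion" ;;
          *)   echo "sct=${sct} sc=${sc}: see the generic status table in the NVMe base spec" ;;
      esac
  }
  decode_nvme_status 0 8    # -> generic: command aborted due to SQ deletion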
01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 [2024-12-09 05:12:38.126405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff13800d800 is same with the state(6) to be set 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Write completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 Read completed with error (sct=0, sc=8) 01:17:46.598 [2024-12-09 05:12:38.126517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff13800d020 is same with the state(6) to be set 01:17:46.598 Initializing NVMe Controllers 01:17:46.598 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:17:46.598 Controller IO queue size 128, less than required. 01:17:46.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:17:46.598 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:17:46.598 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:17:46.598 Initialization complete. Launching workers. 
01:17:46.598 ======================================================== 01:17:46.598 Latency(us) 01:17:46.598 Device Information : IOPS MiB/s Average min max 01:17:46.598 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.68 0.08 909903.54 384.18 1008518.62 01:17:46.598 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.23 0.07 953213.61 632.27 2000852.42 01:17:46.598 ======================================================== 01:17:46.598 Total : 316.91 0.15 930844.67 384.18 2000852.42 01:17:46.598 01:17:46.598 [2024-12-09 05:12:38.127488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9aaa0 (9): Bad file descriptor 01:17:46.598 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:46.598 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 01:17:46.598 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64465 01:17:46.598 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 01:17:46.598 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64465 01:17:47.165 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64465) - No such process 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64465 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64465 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64465 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:47.165 [2024-12-09 05:12:38.651797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64511 01:17:47.165 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:17:47.166 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:47.166 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:47.166 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:17:47.424 [2024-12-09 05:12:38.806704] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
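The trace above (delete_subsystem.sh@45-58) re-creates the subsystem that the previous iteration deleted, re-adds the 10.0.0.3:4420 TCP listener and the Delay0 namespace, then launches spdk_nvme_perf in the background and polls it with kill -0. A condensed sketch of that sequence, with the NQN, address, and perf arguments taken verbatim from the trace; rpc_cmd is approximated here with scripts/rpc.py, and SPDK_ROOT is a hypothetical shorthand for the repo path shown in the log:

```bash
# Condensed sketch of the flow traced above (not the test script itself).
SPDK_ROOT=/home/vagrant/spdk_repo/spdk

# Re-create the subsystem and re-attach listener and namespace, with the
# same arguments that appear in the trace.
"$SPDK_ROOT/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
"$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
"$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start the perf workload in the background: 3 s of randrw (-M 70) at queue
# depth 128, 512-byte I/Os, core mask 0xC (the trace associates lcores 2 and 3).
"$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll the perf process (kill -0 only checks that the PID exists) while the
# test deletes the subsystem underneath it; give up after ~10 s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done
```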
01:17:47.682 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:47.682 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:47.682 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:48.251 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:48.252 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:48.252 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:48.817 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:48.817 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:48.817 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:49.074 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:49.074 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:49.074 05:12:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:49.639 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:49.639 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:49.639 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:50.204 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:50.204 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:50.204 05:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:17:50.462 Initializing NVMe Controllers 01:17:50.462 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:17:50.462 Controller IO queue size 128, less than required. 01:17:50.462 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:17:50.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:17:50.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:17:50.462 Initialization complete. Launching workers. 
01:17:50.462 ======================================================== 01:17:50.462 Latency(us) 01:17:50.462 Device Information : IOPS MiB/s Average min max 01:17:50.462 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001798.82 1000070.30 1006042.29 01:17:50.462 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002863.12 1000085.54 1008480.04 01:17:50.462 ======================================================== 01:17:50.462 Total : 256.00 0.12 1002330.97 1000070.30 1008480.04 01:17:50.462 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64511 01:17:50.719 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64511) - No such process 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64511 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:50.719 rmmod nvme_tcp 01:17:50.719 rmmod nvme_fabrics 01:17:50.719 rmmod nvme_keyring 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64414 ']' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64414 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64414 ']' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64414 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64414 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:50.719 killing process with pid 64414 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64414' 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64414 01:17:50.719 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64414 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:50.976 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 01:17:51.234 01:17:51.234 real 0m9.186s 01:17:51.234 user 0m28.572s 01:17:51.234 sys 0m1.218s 01:17:51.234 ************************************ 01:17:51.234 END TEST nvmf_delete_subsystem 01:17:51.234 ************************************ 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:51.234 ************************************ 01:17:51.234 START TEST nvmf_host_management 01:17:51.234 ************************************ 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:17:51.234 * Looking for test storage... 01:17:51.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:17:51.234 
05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:51.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:51.234 --rc genhtml_branch_coverage=1 01:17:51.234 --rc genhtml_function_coverage=1 01:17:51.234 --rc genhtml_legend=1 01:17:51.234 --rc geninfo_all_blocks=1 01:17:51.234 --rc geninfo_unexecuted_blocks=1 01:17:51.234 01:17:51.234 ' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:51.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:51.234 --rc genhtml_branch_coverage=1 01:17:51.234 --rc genhtml_function_coverage=1 01:17:51.234 --rc genhtml_legend=1 01:17:51.234 --rc geninfo_all_blocks=1 01:17:51.234 --rc geninfo_unexecuted_blocks=1 01:17:51.234 01:17:51.234 ' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:51.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:51.234 --rc genhtml_branch_coverage=1 01:17:51.234 --rc genhtml_function_coverage=1 01:17:51.234 --rc genhtml_legend=1 01:17:51.234 --rc geninfo_all_blocks=1 01:17:51.234 --rc geninfo_unexecuted_blocks=1 01:17:51.234 01:17:51.234 ' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:51.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:51.234 --rc genhtml_branch_coverage=1 01:17:51.234 --rc 
genhtml_function_coverage=1 01:17:51.234 --rc genhtml_legend=1 01:17:51.234 --rc geninfo_all_blocks=1 01:17:51.234 --rc geninfo_unexecuted_blocks=1 01:17:51.234 01:17:51.234 ' 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:51.234 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 01:17:51.493 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:51.493 05:12:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:51.493 Cannot find device "nvmf_init_br" 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:51.493 Cannot find device "nvmf_init_br2" 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:51.493 Cannot find device "nvmf_tgt_br" 01:17:51.493 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:51.494 Cannot find device "nvmf_tgt_br2" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:51.494 Cannot find device "nvmf_init_br" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:51.494 Cannot find device "nvmf_init_br2" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:51.494 Cannot find device "nvmf_tgt_br" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:51.494 Cannot find device "nvmf_tgt_br2" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:51.494 Cannot find device "nvmf_br" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:17:51.494 05:12:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:51.494 Cannot find device "nvmf_init_if" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:51.494 Cannot find device "nvmf_init_if2" 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:51.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:51.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:51.494 05:12:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:51.494 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:51.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:51.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:17:51.752 01:17:51.752 --- 10.0.0.3 ping statistics --- 01:17:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.752 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:51.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:17:51.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 01:17:51.752 01:17:51.752 --- 10.0.0.4 ping statistics --- 01:17:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.752 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:51.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:51.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:17:51.752 01:17:51.752 --- 10.0.0.1 ping statistics --- 01:17:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.752 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:51.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:17:51.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 01:17:51.752 01:17:51.752 --- 10.0.0.2 ping statistics --- 01:17:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:51.752 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64796 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64796 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64796 ']' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:51.752 05:12:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 01:17:51.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:51.752 05:12:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:51.752 [2024-12-09 05:12:43.249444] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:17:51.752 [2024-12-09 05:12:43.249504] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:52.009 [2024-12-09 05:12:43.390353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:17:52.009 [2024-12-09 05:12:43.426948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:52.009 [2024-12-09 05:12:43.426987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:52.009 [2024-12-09 05:12:43.426994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:52.009 [2024-12-09 05:12:43.426999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:52.009 [2024-12-09 05:12:43.427004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
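The notices above come from the nvmf target being started for the host-management test: nvmfappstart runs nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with core mask 0x1E, and the trace that follows shows reactors coming up on cores 1-4 and the TCP transport being created. A minimal sketch of that bring-up, assuming the default /var/tmp/spdk.sock RPC socket and approximating the waitforlisten helper with a poll loop:

```bash
# Sketch of the target bring-up traced above; paths and flags are the ones
# printed in the log, the poll loop stands in for waitforlisten.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Wait until the target answers on its RPC socket before configuring it.
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init \
        >/dev/null 2>&1; do
    sleep 0.5
done

# First configuration step from the trace (host_management.sh@18):
# create the TCP transport with the same -o -u 8192 arguments.
"$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
```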
01:17:52.009 [2024-12-09 05:12:43.427677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:52.009 [2024-12-09 05:12:43.428678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:17:52.009 [2024-12-09 05:12:43.429000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:17:52.009 [2024-12-09 05:12:43.429093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.574 [2024-12-09 05:12:44.163356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:52.574 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.862 Malloc0 01:17:52.862 [2024-12-09 05:12:44.232421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=64868 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64868 /var/tmp/bdevperf.sock 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64868 ']' 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:17:52.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:17:52.862 { 01:17:52.862 "params": { 01:17:52.862 "name": "Nvme$subsystem", 01:17:52.862 "trtype": "$TEST_TRANSPORT", 01:17:52.862 "traddr": "$NVMF_FIRST_TARGET_IP", 01:17:52.862 "adrfam": "ipv4", 01:17:52.862 "trsvcid": "$NVMF_PORT", 01:17:52.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:17:52.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:17:52.862 "hdgst": ${hdgst:-false}, 01:17:52.862 "ddgst": ${ddgst:-false} 01:17:52.862 }, 01:17:52.862 "method": "bdev_nvme_attach_controller" 01:17:52.862 } 01:17:52.862 EOF 01:17:52.862 )") 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:17:52.862 05:12:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:17:52.862 "params": { 01:17:52.862 "name": "Nvme0", 01:17:52.862 "trtype": "tcp", 01:17:52.862 "traddr": "10.0.0.3", 01:17:52.862 "adrfam": "ipv4", 01:17:52.862 "trsvcid": "4420", 01:17:52.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:52.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:17:52.862 "hdgst": false, 01:17:52.862 "ddgst": false 01:17:52.862 }, 01:17:52.862 "method": "bdev_nvme_attach_controller" 01:17:52.862 }' 01:17:52.862 [2024-12-09 05:12:44.309189] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
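The bdevperf job above is driven by a JSON config generated on the fly (gen_nvmf_target_json, handed over via --json /dev/fd/63). A hedged reconstruction of that config as a standalone file, using the bdev_nvme_attach_controller parameters printed in the trace; the outer subsystems/bdev wrapper is the usual SPDK JSON-config layout and is assumed here, as is the temporary file path:

```bash
# Reconstruction of the generated config; only the bdev_nvme_attach_controller
# entry is taken from the trace, the wrapper and file path are assumptions.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf invocation as the trace: 10 s of "verify" I/O at queue
# depth 64 with 64 KiB blocks against the attached Nvme0n1 bdev.
"$SPDK_ROOT/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10
```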
01:17:52.862 [2024-12-09 05:12:44.309245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64868 ] 01:17:52.862 [2024-12-09 05:12:44.448150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:53.120 [2024-12-09 05:12:44.483868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:53.120 Running I/O for 10 seconds... 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:17:53.689 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1258 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1258 -ge 100 ']' 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:53.690 [2024-12-09 05:12:45.241238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb80d0 is same with the state(6) to be set 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:53.690 [2024-12-09 05:12:45.249989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:17:53.690 [2024-12-09 05:12:45.250024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.250034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:17:53.690 [2024-12-09 05:12:45.250041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.250048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:17:53.690 [2024-12-09 05:12:45.250053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.250060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:17:53.690 [2024-12-09 05:12:45.250066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.250071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bc660 is same with the state(6) to be set 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:53.690 05:12:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:17:53.690 task offset: 40960 on job bdev=Nvme0n1 fails 01:17:53.690 01:17:53.690 Latency(us) 01:17:53.690 [2024-12-09T05:12:45.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:53.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:17:53.690 Job: Nvme0n1 ended in about 0.63 seconds with error 01:17:53.690 Verification LBA range: start 0x0 length 0x400 01:17:53.690 Nvme0n1 : 0.63 2120.52 132.53 100.98 0.00 28129.37 1241.40 26214.40 
01:17:53.690 [2024-12-09T05:12:45.307Z] =================================================================================================================== 01:17:53.690 [2024-12-09T05:12:45.307Z] Total : 2120.52 132.53 100.98 0.00 28129.37 1241.40 26214.40 01:17:53.690 [2024-12-09 05:12:45.257629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.257991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.257996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.690 [2024-12-09 05:12:45.258366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.690 [2024-12-09 05:12:45.258371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.258495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:53.691 [2024-12-09 05:12:45.258501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:53.691 [2024-12-09 05:12:45.259640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:17:53.691 [2024-12-09 05:12:45.261466] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:53.691 [2024-12-09 05:12:45.261484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bc660 (9): Bad file descriptor 01:17:53.691 [2024-12-09 05:12:45.264313] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
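The burst of ABORTED - SQ DELETION completions and the controller reset logged above are the intended effect of this step of the test: while bdevperf still has writes in flight, the host's access to the subsystem is revoked and immediately restored, which tears down the queue pairs on the target side and lets the initiator reconnect through its automatic reset path. Condensed from the xtrace (the trace issues these through the suite's rpc_cmd wrapper; calling scripts/rpc.py directly, as sketched here, sends the same RPCs):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# revoke access: outstanding WRITEs complete with ABORTED - SQ DELETION
$rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access so the initiator's controller reset can succeed
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # give bdevperf time to observe the reset, as in the trace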
01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64868 01:17:55.066 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64868) - No such process 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:17:55.066 { 01:17:55.066 "params": { 01:17:55.066 "name": "Nvme$subsystem", 01:17:55.066 "trtype": "$TEST_TRANSPORT", 01:17:55.066 "traddr": "$NVMF_FIRST_TARGET_IP", 01:17:55.066 "adrfam": "ipv4", 01:17:55.066 "trsvcid": "$NVMF_PORT", 01:17:55.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:17:55.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:17:55.066 "hdgst": ${hdgst:-false}, 01:17:55.066 "ddgst": ${ddgst:-false} 01:17:55.066 }, 01:17:55.066 "method": "bdev_nvme_attach_controller" 01:17:55.066 } 01:17:55.066 EOF 01:17:55.066 )") 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:17:55.066 05:12:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:17:55.066 "params": { 01:17:55.066 "name": "Nvme0", 01:17:55.066 "trtype": "tcp", 01:17:55.066 "traddr": "10.0.0.3", 01:17:55.066 "adrfam": "ipv4", 01:17:55.066 "trsvcid": "4420", 01:17:55.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:55.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:17:55.066 "hdgst": false, 01:17:55.066 "ddgst": false 01:17:55.066 }, 01:17:55.066 "method": "bdev_nvme_attach_controller" 01:17:55.066 }' 01:17:55.066 [2024-12-09 05:12:46.292298] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:17:55.066 [2024-12-09 05:12:46.292349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64917 ] 01:17:55.066 [2024-12-09 05:12:46.425125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:55.066 [2024-12-09 05:12:46.455210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:55.066 Running I/O for 1 seconds... 
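After the kill -9 of the first bdevperf (pid 64868, already gone by the time the signal is sent), a second, one-second verify run is started against the same namespace, with the bdev_nvme_attach_controller configuration shown above fed in as JSON on /dev/fd/62. An equivalent stand-alone invocation would look roughly like the sketch below; writing the generated config to a regular file instead of a file descriptor is an assumption made here for readability.

# gen_nvmf_target_json 0 emits the attach-controller config printed above
gen_nvmf_target_json 0 > /tmp/nvme0_bdevperf.json
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verification, -t 1: run for one second
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 1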
01:17:56.000 2678.00 IOPS, 167.38 MiB/s 01:17:56.000 Latency(us) 01:17:56.000 [2024-12-09T05:12:47.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:56.000 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:17:56.000 Verification LBA range: start 0x0 length 0x400 01:17:56.000 Nvme0n1 : 1.02 2709.89 169.37 0.00 0.00 23239.32 2722.26 21878.94 01:17:56.000 [2024-12-09T05:12:47.617Z] =================================================================================================================== 01:17:56.000 [2024-12-09T05:12:47.617Z] Total : 2709.89 169.37 0.00 0.00 23239.32 2722.26 21878.94 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:56.258 rmmod nvme_tcp 01:17:56.258 rmmod nvme_fabrics 01:17:56.258 rmmod nvme_keyring 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64796 ']' 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64796 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64796 ']' 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64796 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64796 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 01:17:56.258 killing process with pid 64796 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64796' 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64796 01:17:56.258 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64796 01:17:56.517 [2024-12-09 05:12:47.978539] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:56.517 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:56.517 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:17:56.780 01:17:56.780 real 0m5.542s 01:17:56.780 user 0m20.795s 01:17:56.780 sys 0m1.069s 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:56.780 ************************************ 01:17:56.780 END TEST nvmf_host_management 01:17:56.780 ************************************ 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:56.780 ************************************ 01:17:56.780 START TEST nvmf_lvol 01:17:56.780 ************************************ 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:17:56.780 * Looking for test storage... 
01:17:56.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 01:17:56.780 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:57.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:57.046 --rc genhtml_branch_coverage=1 01:17:57.046 --rc genhtml_function_coverage=1 01:17:57.046 --rc genhtml_legend=1 01:17:57.046 --rc geninfo_all_blocks=1 01:17:57.046 --rc geninfo_unexecuted_blocks=1 01:17:57.046 01:17:57.046 ' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:57.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:57.046 --rc genhtml_branch_coverage=1 01:17:57.046 --rc genhtml_function_coverage=1 01:17:57.046 --rc genhtml_legend=1 01:17:57.046 --rc geninfo_all_blocks=1 01:17:57.046 --rc geninfo_unexecuted_blocks=1 01:17:57.046 01:17:57.046 ' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:57.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:57.046 --rc genhtml_branch_coverage=1 01:17:57.046 --rc genhtml_function_coverage=1 01:17:57.046 --rc genhtml_legend=1 01:17:57.046 --rc geninfo_all_blocks=1 01:17:57.046 --rc geninfo_unexecuted_blocks=1 01:17:57.046 01:17:57.046 ' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:57.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:57.046 --rc genhtml_branch_coverage=1 01:17:57.046 --rc genhtml_function_coverage=1 01:17:57.046 --rc genhtml_legend=1 01:17:57.046 --rc geninfo_all_blocks=1 01:17:57.046 --rc geninfo_unexecuted_blocks=1 01:17:57.046 01:17:57.046 ' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:57.046 05:12:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:57.046 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:17:57.046 
05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:57.046 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:57.047 Cannot find device "nvmf_init_br" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:57.047 Cannot find device "nvmf_init_br2" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:57.047 Cannot find device "nvmf_tgt_br" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:57.047 Cannot find device "nvmf_tgt_br2" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:57.047 Cannot find device "nvmf_init_br" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:57.047 Cannot find device "nvmf_init_br2" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:57.047 Cannot find device "nvmf_tgt_br" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:57.047 Cannot find device "nvmf_tgt_br2" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:57.047 Cannot find device "nvmf_br" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:57.047 Cannot find device "nvmf_init_if" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:57.047 Cannot find device "nvmf_init_if2" 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:57.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:57.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:57.047 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:57.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:57.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 01:17:57.306 01:17:57.306 --- 10.0.0.3 ping statistics --- 01:17:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:57.306 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:57.306 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:17:57.306 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 01:17:57.306 01:17:57.306 --- 10.0.0.4 ping statistics --- 01:17:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:57.306 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:57.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:57.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 01:17:57.306 01:17:57.306 --- 10.0.0.1 ping statistics --- 01:17:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:57.306 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:57.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:17:57.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:17:57.306 01:17:57.306 --- 10.0.0.2 ping statistics --- 01:17:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:57.306 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65178 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65178 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65178 ']' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:57.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:57.306 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:57.306 [2024-12-09 05:12:48.786917] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:17:57.306 [2024-12-09 05:12:48.787191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:57.306 [2024-12-09 05:12:48.920292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:57.565 [2024-12-09 05:12:48.949406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:57.565 [2024-12-09 05:12:48.949442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:57.565 [2024-12-09 05:12:48.949447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:57.565 [2024-12-09 05:12:48.949451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:57.565 [2024-12-09 05:12:48.949454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:57.565 [2024-12-09 05:12:48.950204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:57.565 [2024-12-09 05:12:48.950283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:57.565 [2024-12-09 05:12:48.950286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:58.131 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:17:58.389 [2024-12-09 05:12:49.831192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:58.389 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:17:58.648 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:17:58.648 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:17:58.908 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:17:58.908 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:17:58.908 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:17:59.169 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=67922f7d-9baf-44e1-92ca-44481556ab04 01:17:59.169 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
67922f7d-9baf-44e1-92ca-44481556ab04 lvol 20 01:17:59.430 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=34e15905-4592-484c-a39b-49ce75f2250c 01:17:59.430 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:17:59.690 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34e15905-4592-484c-a39b-49ce75f2250c 01:17:59.690 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:17:59.951 [2024-12-09 05:12:51.480333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:59.951 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:00.212 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:18:00.212 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65319 01:18:00.212 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:18:01.177 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 34e15905-4592-484c-a39b-49ce75f2250c MY_SNAPSHOT 01:18:01.436 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f349bc4f-c287-4149-804b-f1f0267098c5 01:18:01.436 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 34e15905-4592-484c-a39b-49ce75f2250c 30 01:18:01.695 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f349bc4f-c287-4149-804b-f1f0267098c5 MY_CLONE 01:18:01.956 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df1f71d8-0347-4af6-9095-d3352111b78d 01:18:01.956 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate df1f71d8-0347-4af6-9095-d3352111b78d 01:18:02.215 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65319 01:18:10.441 Initializing NVMe Controllers 01:18:10.441 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:18:10.441 Controller IO queue size 128, less than required. 01:18:10.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:18:10.441 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:18:10.441 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:18:10.441 Initialization complete. Launching workers. 
01:18:10.441 ======================================================== 01:18:10.441 Latency(us) 01:18:10.441 Device Information : IOPS MiB/s Average min max 01:18:10.441 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16509.00 64.49 7755.19 2219.89 39696.09 01:18:10.441 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16652.30 65.05 7689.12 2546.76 37861.17 01:18:10.441 ======================================================== 01:18:10.441 Total : 33161.30 129.54 7722.01 2219.89 39696.09 01:18:10.441 01:18:10.441 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:18:10.705 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 34e15905-4592-484c-a39b-49ce75f2250c 01:18:10.964 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67922f7d-9baf-44e1-92ca-44481556ab04 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:11.222 rmmod nvme_tcp 01:18:11.222 rmmod nvme_fabrics 01:18:11.222 rmmod nvme_keyring 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65178 ']' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65178 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65178 ']' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65178 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65178 01:18:11.222 killing process with pid 65178 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65178' 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65178 01:18:11.222 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65178 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:11.482 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:11.483 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:11.483 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:18:11.743 01:18:11.743 real 0m14.831s 01:18:11.743 user 1m2.750s 01:18:11.743 sys 0m2.970s 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:18:11.743 ************************************ 01:18:11.743 END TEST nvmf_lvol 01:18:11.743 ************************************ 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:11.743 ************************************ 01:18:11.743 START TEST nvmf_lvs_grow 01:18:11.743 ************************************ 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:18:11.743 * Looking for test storage... 01:18:11.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:18:11.743 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:11.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:11.744 --rc genhtml_branch_coverage=1 01:18:11.744 --rc genhtml_function_coverage=1 01:18:11.744 --rc genhtml_legend=1 01:18:11.744 --rc geninfo_all_blocks=1 01:18:11.744 --rc geninfo_unexecuted_blocks=1 01:18:11.744 01:18:11.744 ' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:11.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:11.744 --rc genhtml_branch_coverage=1 01:18:11.744 --rc genhtml_function_coverage=1 01:18:11.744 --rc genhtml_legend=1 01:18:11.744 --rc geninfo_all_blocks=1 01:18:11.744 --rc geninfo_unexecuted_blocks=1 01:18:11.744 01:18:11.744 ' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:11.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:11.744 --rc genhtml_branch_coverage=1 01:18:11.744 --rc genhtml_function_coverage=1 01:18:11.744 --rc genhtml_legend=1 01:18:11.744 --rc geninfo_all_blocks=1 01:18:11.744 --rc geninfo_unexecuted_blocks=1 01:18:11.744 01:18:11.744 ' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:11.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:11.744 --rc genhtml_branch_coverage=1 01:18:11.744 --rc genhtml_function_coverage=1 01:18:11.744 --rc genhtml_legend=1 01:18:11.744 --rc geninfo_all_blocks=1 01:18:11.744 --rc geninfo_unexecuted_blocks=1 01:18:11.744 01:18:11.744 ' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:18:11.744 05:13:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:11.744 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
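At this point nvmf_lvs_grow.sh has fixed its two control sockets: target-side RPCs go through the default /var/tmp/spdk.sock, while the bdevperf initiator started later in the run is driven through /var/tmp/bdevperf.sock. As a condensed sketch of that split, assembled only from commands recorded further down in this log (it is not an additional step the harness performs):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target-side configuration goes to the default application socket (/var/tmp/spdk.sock)
  $rpc_py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

  # initiator-side (bdevperf) commands name the second socket explicitly with -s
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Keeping the two sockets separate lets the same rpc.py binary configure the nvmf target and query the bdevperf process independently, as the lvs_grow_clean steps below show.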
01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:11.744 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:11.745 Cannot find device "nvmf_init_br" 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:11.745 Cannot find device "nvmf_init_br2" 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:11.745 Cannot find device "nvmf_tgt_br" 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:11.745 Cannot find device "nvmf_tgt_br2" 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:11.745 Cannot find device "nvmf_init_br" 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:18:11.745 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:12.006 Cannot find device "nvmf_init_br2" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:12.006 Cannot find device "nvmf_tgt_br" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:12.006 Cannot find device "nvmf_tgt_br2" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:12.006 Cannot find device "nvmf_br" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:12.006 Cannot find device "nvmf_init_if" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:12.006 Cannot find device "nvmf_init_if2" 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:12.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:12.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
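For orientation, the veth topology that nvmf_veth_init has rebuilt at this point can be condensed as follows; interface names and the 10.0.0.x/24 addresses are taken from the commands logged above, and test/nvmf/common.sh remains the authoritative sequence:

  ip netns add nvmf_tgt_ns_spdk                                  # target-side network namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator veth pair 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator veth pair 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target veth pair 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target veth pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses (host side)
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses (namespace side)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                # bridge joining the *_br peer ends
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

The ipts/iptables rules and ping checks that follow simply accept inbound TCP on port 4420 at the initiator-side interfaces, allow forwarding across nvmf_br, and confirm host-to-namespace reachability before nvmf_tgt is started inside the namespace.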
01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:12.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:12.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 01:18:12.006 01:18:12.006 --- 10.0.0.3 ping statistics --- 01:18:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.006 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:12.006 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:12.006 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 01:18:12.006 01:18:12.006 --- 10.0.0.4 ping statistics --- 01:18:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.006 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:12.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:12.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 01:18:12.006 01:18:12.006 --- 10.0.0.1 ping statistics --- 01:18:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.006 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:12.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:12.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 01:18:12.006 01:18:12.006 --- 10.0.0.2 ping statistics --- 01:18:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.006 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65726 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65726 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65726 ']' 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:12.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:12.006 05:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:18:12.266 [2024-12-09 05:13:03.654658] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:12.266 [2024-12-09 05:13:03.654712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:12.266 [2024-12-09 05:13:03.792761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:12.266 [2024-12-09 05:13:03.836151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:12.266 [2024-12-09 05:13:03.836191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:12.266 [2024-12-09 05:13:03.836197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:12.266 [2024-12-09 05:13:03.836203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:12.266 [2024-12-09 05:13:03.836207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:12.266 [2024-12-09 05:13:03.836453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:13.201 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:18:13.202 [2024-12-09 05:13:04.677554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:13.202 ************************************ 01:18:13.202 START TEST lvs_grow_clean 01:18:13.202 ************************************ 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:18:13.202 05:13:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:13.202 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:13.459 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:18:13.459 05:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:18:13.724 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=80d287e1-de3f-41a1-aacc-d0828496130d 01:18:13.724 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:13.724 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:18:13.981 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:18:13.981 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:18:13.981 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80d287e1-de3f-41a1-aacc-d0828496130d lvol 150 01:18:14.239 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=93031954-8a27-408a-bb00-85e8bf22dd6e 01:18:14.239 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:14.239 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:18:14.497 [2024-12-09 05:13:05.869442] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:18:14.497 [2024-12-09 05:13:05.869497] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:18:14.497 true 01:18:14.497 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:14.497 05:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:18:14.497 05:13:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:18:14.497 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:18:14.757 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93031954-8a27-408a-bb00-85e8bf22dd6e 01:18:15.015 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:18:15.273 [2024-12-09 05:13:06.705833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:15.273 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65888 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65888 /var/tmp/bdevperf.sock 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 65888 ']' 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:15.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:15.531 05:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:18:15.531 [2024-12-09 05:13:06.963077] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:15.531 [2024-12-09 05:13:06.963138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65888 ] 01:18:15.531 [2024-12-09 05:13:07.099301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:15.531 [2024-12-09 05:13:07.135273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:15.791 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:15.791 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:18:15.791 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:18:16.051 Nvme0n1 01:18:16.051 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:18:16.051 [ 01:18:16.051 { 01:18:16.051 "aliases": [ 01:18:16.051 "93031954-8a27-408a-bb00-85e8bf22dd6e" 01:18:16.051 ], 01:18:16.051 "assigned_rate_limits": { 01:18:16.051 "r_mbytes_per_sec": 0, 01:18:16.051 "rw_ios_per_sec": 0, 01:18:16.051 "rw_mbytes_per_sec": 0, 01:18:16.051 "w_mbytes_per_sec": 0 01:18:16.051 }, 01:18:16.051 "block_size": 4096, 01:18:16.051 "claimed": false, 01:18:16.051 "driver_specific": { 01:18:16.051 "mp_policy": "active_passive", 01:18:16.051 "nvme": [ 01:18:16.051 { 01:18:16.051 "ctrlr_data": { 01:18:16.051 "ana_reporting": false, 01:18:16.051 "cntlid": 1, 01:18:16.051 "firmware_revision": "25.01", 01:18:16.051 "model_number": "SPDK bdev Controller", 01:18:16.051 "multi_ctrlr": true, 01:18:16.051 "oacs": { 01:18:16.051 "firmware": 0, 01:18:16.051 "format": 0, 01:18:16.051 "ns_manage": 0, 01:18:16.051 "security": 0 01:18:16.051 }, 01:18:16.051 "serial_number": "SPDK0", 01:18:16.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:18:16.051 "vendor_id": "0x8086" 01:18:16.051 }, 01:18:16.051 "ns_data": { 01:18:16.051 "can_share": true, 01:18:16.051 "id": 1 01:18:16.051 }, 01:18:16.051 "trid": { 01:18:16.051 "adrfam": "IPv4", 01:18:16.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:18:16.051 "traddr": "10.0.0.3", 01:18:16.051 "trsvcid": "4420", 01:18:16.051 "trtype": "TCP" 01:18:16.051 }, 01:18:16.051 "vs": { 01:18:16.051 "nvme_version": "1.3" 01:18:16.051 } 01:18:16.051 } 01:18:16.051 ] 01:18:16.051 }, 01:18:16.051 "memory_domains": [ 01:18:16.051 { 01:18:16.051 "dma_device_id": "system", 01:18:16.051 "dma_device_type": 1 01:18:16.051 } 01:18:16.051 ], 01:18:16.051 "name": "Nvme0n1", 01:18:16.051 "num_blocks": 38912, 01:18:16.051 "numa_id": -1, 01:18:16.051 "product_name": "NVMe disk", 01:18:16.051 "supported_io_types": { 01:18:16.051 "abort": true, 01:18:16.051 "compare": true, 01:18:16.051 "compare_and_write": true, 01:18:16.051 "copy": true, 01:18:16.051 "flush": true, 01:18:16.051 "get_zone_info": false, 01:18:16.051 "nvme_admin": true, 01:18:16.051 "nvme_io": true, 01:18:16.051 "nvme_io_md": false, 01:18:16.051 "nvme_iov_md": false, 01:18:16.051 "read": true, 01:18:16.051 "reset": true, 01:18:16.051 "seek_data": false, 01:18:16.051 "seek_hole": false, 01:18:16.051 "unmap": true, 01:18:16.051 
"write": true, 01:18:16.051 "write_zeroes": true, 01:18:16.051 "zcopy": false, 01:18:16.051 "zone_append": false, 01:18:16.051 "zone_management": false 01:18:16.051 }, 01:18:16.051 "uuid": "93031954-8a27-408a-bb00-85e8bf22dd6e", 01:18:16.051 "zoned": false 01:18:16.051 } 01:18:16.051 ] 01:18:16.309 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65916 01:18:16.309 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:18:16.309 05:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:18:16.309 Running I/O for 10 seconds... 01:18:17.250 Latency(us) 01:18:17.250 [2024-12-09T05:13:08.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:17.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:17.250 Nvme0n1 : 1.00 13279.00 51.87 0.00 0.00 0.00 0.00 0.00 01:18:17.250 [2024-12-09T05:13:08.867Z] =================================================================================================================== 01:18:17.250 [2024-12-09T05:13:08.867Z] Total : 13279.00 51.87 0.00 0.00 0.00 0.00 0.00 01:18:17.250 01:18:18.187 05:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:18.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:18.188 Nvme0n1 : 2.00 12538.00 48.98 0.00 0.00 0.00 0.00 0.00 01:18:18.188 [2024-12-09T05:13:09.805Z] =================================================================================================================== 01:18:18.188 [2024-12-09T05:13:09.805Z] Total : 12538.00 48.98 0.00 0.00 0.00 0.00 0.00 01:18:18.188 01:18:18.445 true 01:18:18.445 05:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:18.445 05:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:18:18.705 05:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:18:18.705 05:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:18:18.705 05:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65916 01:18:19.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:19.274 Nvme0n1 : 3.00 11393.00 44.50 0.00 0.00 0.00 0.00 0.00 01:18:19.274 [2024-12-09T05:13:10.891Z] =================================================================================================================== 01:18:19.274 [2024-12-09T05:13:10.891Z] Total : 11393.00 44.50 0.00 0.00 0.00 0.00 0.00 01:18:19.274 01:18:20.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:20.207 Nvme0n1 : 4.00 11756.00 45.92 0.00 0.00 0.00 0.00 0.00 01:18:20.207 [2024-12-09T05:13:11.824Z] =================================================================================================================== 01:18:20.207 [2024-12-09T05:13:11.824Z] Total : 11756.00 45.92 0.00 
0.00 0.00 0.00 0.00 01:18:20.207 01:18:21.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:21.139 Nvme0n1 : 5.00 11774.00 45.99 0.00 0.00 0.00 0.00 0.00 01:18:21.139 [2024-12-09T05:13:12.756Z] =================================================================================================================== 01:18:21.139 [2024-12-09T05:13:12.756Z] Total : 11774.00 45.99 0.00 0.00 0.00 0.00 0.00 01:18:21.139 01:18:22.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:22.557 Nvme0n1 : 6.00 11300.17 44.14 0.00 0.00 0.00 0.00 0.00 01:18:22.557 [2024-12-09T05:13:14.174Z] =================================================================================================================== 01:18:22.557 [2024-12-09T05:13:14.174Z] Total : 11300.17 44.14 0.00 0.00 0.00 0.00 0.00 01:18:22.557 01:18:23.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:23.489 Nvme0n1 : 7.00 11094.43 43.34 0.00 0.00 0.00 0.00 0.00 01:18:23.489 [2024-12-09T05:13:15.106Z] =================================================================================================================== 01:18:23.489 [2024-12-09T05:13:15.106Z] Total : 11094.43 43.34 0.00 0.00 0.00 0.00 0.00 01:18:23.489 01:18:24.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:24.420 Nvme0n1 : 8.00 10942.50 42.74 0.00 0.00 0.00 0.00 0.00 01:18:24.420 [2024-12-09T05:13:16.037Z] =================================================================================================================== 01:18:24.420 [2024-12-09T05:13:16.037Z] Total : 10942.50 42.74 0.00 0.00 0.00 0.00 0.00 01:18:24.420 01:18:25.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:25.370 Nvme0n1 : 9.00 10811.44 42.23 0.00 0.00 0.00 0.00 0.00 01:18:25.370 [2024-12-09T05:13:16.987Z] =================================================================================================================== 01:18:25.370 [2024-12-09T05:13:16.987Z] Total : 10811.44 42.23 0.00 0.00 0.00 0.00 0.00 01:18:25.370 01:18:26.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:26.305 Nvme0n1 : 10.00 10927.60 42.69 0.00 0.00 0.00 0.00 0.00 01:18:26.305 [2024-12-09T05:13:17.922Z] =================================================================================================================== 01:18:26.305 [2024-12-09T05:13:17.922Z] Total : 10927.60 42.69 0.00 0.00 0.00 0.00 0.00 01:18:26.305 01:18:26.305 01:18:26.305 Latency(us) 01:18:26.305 [2024-12-09T05:13:17.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:26.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:26.305 Nvme0n1 : 10.01 10930.66 42.70 0.00 0.00 11706.47 4612.73 135508.28 01:18:26.305 [2024-12-09T05:13:17.922Z] =================================================================================================================== 01:18:26.305 [2024-12-09T05:13:17.922Z] Total : 10930.66 42.70 0.00 0.00 11706.47 4612.73 135508.28 01:18:26.305 { 01:18:26.305 "results": [ 01:18:26.305 { 01:18:26.305 "job": "Nvme0n1", 01:18:26.305 "core_mask": "0x2", 01:18:26.305 "workload": "randwrite", 01:18:26.305 "status": "finished", 01:18:26.305 "queue_depth": 128, 01:18:26.305 "io_size": 4096, 01:18:26.305 "runtime": 10.008911, 01:18:26.305 "iops": 10930.659689150998, 01:18:26.305 "mibps": 42.697889410746086, 01:18:26.305 "io_failed": 0, 01:18:26.305 "io_timeout": 0, 01:18:26.305 
"avg_latency_us": 11706.47405089956, 01:18:26.305 "min_latency_us": 4612.7261538461535, 01:18:26.305 "max_latency_us": 135508.28307692308 01:18:26.305 } 01:18:26.305 ], 01:18:26.305 "core_count": 1 01:18:26.305 } 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65888 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 65888 ']' 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 65888 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65888 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:18:26.305 killing process with pid 65888 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65888' 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 65888 01:18:26.305 Received shutdown signal, test time was about 10.000000 seconds 01:18:26.305 01:18:26.305 Latency(us) 01:18:26.305 [2024-12-09T05:13:17.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:26.305 [2024-12-09T05:13:17.922Z] =================================================================================================================== 01:18:26.305 [2024-12-09T05:13:17.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:18:26.305 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 65888 01:18:26.564 05:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:26.564 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:18:26.822 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:26.822 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:18:27.081 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:18:27.081 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:18:27.081 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:27.341 [2024-12-09 05:13:18.790250] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: 
closing lvstore lvs 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:18:27.341 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:27.603 2024/12/09 05:13:19 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:80d287e1-de3f-41a1-aacc-d0828496130d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:18:27.603 request: 01:18:27.603 { 01:18:27.603 "method": "bdev_lvol_get_lvstores", 01:18:27.603 "params": { 01:18:27.603 "uuid": "80d287e1-de3f-41a1-aacc-d0828496130d" 01:18:27.603 } 01:18:27.603 } 01:18:27.603 Got JSON-RPC error response 01:18:27.603 GoRPCClient: error on JSON-RPC call 01:18:27.603 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:18:27.603 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:27.603 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:18:27.603 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:18:27.603 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:27.863 aio_bdev 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 93031954-8a27-408a-bb00-85e8bf22dd6e 
01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=93031954-8a27-408a-bb00-85e8bf22dd6e 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:18:27.863 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93031954-8a27-408a-bb00-85e8bf22dd6e -t 2000 01:18:28.122 [ 01:18:28.122 { 01:18:28.122 "aliases": [ 01:18:28.122 "lvs/lvol" 01:18:28.122 ], 01:18:28.122 "assigned_rate_limits": { 01:18:28.122 "r_mbytes_per_sec": 0, 01:18:28.122 "rw_ios_per_sec": 0, 01:18:28.122 "rw_mbytes_per_sec": 0, 01:18:28.122 "w_mbytes_per_sec": 0 01:18:28.122 }, 01:18:28.122 "block_size": 4096, 01:18:28.122 "claimed": false, 01:18:28.122 "driver_specific": { 01:18:28.122 "lvol": { 01:18:28.122 "base_bdev": "aio_bdev", 01:18:28.122 "clone": false, 01:18:28.122 "esnap_clone": false, 01:18:28.122 "lvol_store_uuid": "80d287e1-de3f-41a1-aacc-d0828496130d", 01:18:28.122 "num_allocated_clusters": 38, 01:18:28.122 "snapshot": false, 01:18:28.122 "thin_provision": false 01:18:28.122 } 01:18:28.122 }, 01:18:28.122 "name": "93031954-8a27-408a-bb00-85e8bf22dd6e", 01:18:28.122 "num_blocks": 38912, 01:18:28.122 "product_name": "Logical Volume", 01:18:28.122 "supported_io_types": { 01:18:28.122 "abort": false, 01:18:28.122 "compare": false, 01:18:28.122 "compare_and_write": false, 01:18:28.122 "copy": false, 01:18:28.122 "flush": false, 01:18:28.122 "get_zone_info": false, 01:18:28.122 "nvme_admin": false, 01:18:28.122 "nvme_io": false, 01:18:28.122 "nvme_io_md": false, 01:18:28.122 "nvme_iov_md": false, 01:18:28.122 "read": true, 01:18:28.122 "reset": true, 01:18:28.122 "seek_data": true, 01:18:28.122 "seek_hole": true, 01:18:28.122 "unmap": true, 01:18:28.122 "write": true, 01:18:28.122 "write_zeroes": true, 01:18:28.122 "zcopy": false, 01:18:28.122 "zone_append": false, 01:18:28.122 "zone_management": false 01:18:28.122 }, 01:18:28.122 "uuid": "93031954-8a27-408a-bb00-85e8bf22dd6e", 01:18:28.122 "zoned": false 01:18:28.122 } 01:18:28.122 ] 01:18:28.122 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:18:28.122 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:28.122 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:18:28.382 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:18:28.382 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:28.382 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:18:28.643 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:18:28.643 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 93031954-8a27-408a-bb00-85e8bf22dd6e 01:18:28.903 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80d287e1-de3f-41a1-aacc-d0828496130d 01:18:29.164 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:29.164 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:29.735 ************************************ 01:18:29.735 END TEST lvs_grow_clean 01:18:29.735 ************************************ 01:18:29.735 01:18:29.735 real 0m16.349s 01:18:29.735 user 0m15.611s 01:18:29.736 sys 0m1.650s 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:29.736 ************************************ 01:18:29.736 START TEST lvs_grow_dirty 01:18:29.736 ************************************ 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:18:29.736 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:18:29.996 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:29.996 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:18:29.996 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:30.257 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:18:30.257 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:18:30.257 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe lvol 150 01:18:30.517 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:30.517 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:30.517 05:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:18:30.778 [2024-12-09 05:13:22.139652] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:18:30.778 [2024-12-09 05:13:22.139705] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:18:30.778 true 01:18:30.778 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:18:30.778 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:30.778 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:18:30.778 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:18:31.039 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:31.039 05:13:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:18:31.299 [2024-12-09 05:13:22.791942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:31.299 05:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:31.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66305 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66305 /var/tmp/bdevperf.sock 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66305 ']' 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:31.559 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:31.559 [2024-12-09 05:13:23.052110] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:31.559 [2024-12-09 05:13:23.052173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66305 ] 01:18:31.820 [2024-12-09 05:13:23.183309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:31.820 [2024-12-09 05:13:23.218686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:32.389 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:32.389 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:18:32.389 05:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:18:32.651 Nvme0n1 01:18:32.651 05:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:18:32.912 [ 01:18:32.912 { 01:18:32.912 "aliases": [ 01:18:32.912 "c2b4abda-52dc-4de2-8566-83ecf1c5f038" 01:18:32.912 ], 01:18:32.912 "assigned_rate_limits": { 01:18:32.912 "r_mbytes_per_sec": 0, 01:18:32.912 "rw_ios_per_sec": 0, 01:18:32.912 "rw_mbytes_per_sec": 0, 01:18:32.912 "w_mbytes_per_sec": 0 01:18:32.912 }, 01:18:32.912 "block_size": 4096, 01:18:32.912 "claimed": false, 01:18:32.912 "driver_specific": { 01:18:32.912 "mp_policy": "active_passive", 01:18:32.912 "nvme": [ 01:18:32.912 { 01:18:32.912 "ctrlr_data": { 01:18:32.912 "ana_reporting": false, 01:18:32.912 "cntlid": 1, 01:18:32.912 "firmware_revision": "25.01", 01:18:32.912 "model_number": "SPDK bdev Controller", 01:18:32.912 "multi_ctrlr": true, 01:18:32.912 "oacs": { 01:18:32.912 "firmware": 0, 01:18:32.912 "format": 0, 01:18:32.912 "ns_manage": 0, 01:18:32.912 "security": 0 01:18:32.912 }, 01:18:32.912 "serial_number": "SPDK0", 01:18:32.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:18:32.912 "vendor_id": "0x8086" 01:18:32.912 }, 01:18:32.912 "ns_data": { 01:18:32.912 "can_share": true, 01:18:32.912 "id": 1 01:18:32.912 }, 01:18:32.912 "trid": { 01:18:32.912 "adrfam": "IPv4", 01:18:32.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:18:32.912 "traddr": "10.0.0.3", 01:18:32.912 "trsvcid": "4420", 01:18:32.912 "trtype": "TCP" 01:18:32.912 }, 01:18:32.912 "vs": { 01:18:32.912 "nvme_version": "1.3" 01:18:32.912 } 01:18:32.912 } 01:18:32.912 ] 01:18:32.912 }, 01:18:32.912 "memory_domains": [ 01:18:32.912 { 01:18:32.912 "dma_device_id": "system", 01:18:32.912 "dma_device_type": 1 01:18:32.912 } 01:18:32.912 ], 01:18:32.912 "name": "Nvme0n1", 01:18:32.912 "num_blocks": 38912, 01:18:32.912 "numa_id": -1, 01:18:32.912 "product_name": "NVMe disk", 01:18:32.912 "supported_io_types": { 01:18:32.912 "abort": true, 01:18:32.912 "compare": true, 01:18:32.912 "compare_and_write": true, 01:18:32.912 "copy": true, 01:18:32.912 "flush": true, 01:18:32.912 "get_zone_info": false, 01:18:32.912 "nvme_admin": true, 01:18:32.912 "nvme_io": true, 01:18:32.912 "nvme_io_md": false, 01:18:32.912 "nvme_iov_md": false, 01:18:32.912 "read": true, 01:18:32.912 "reset": true, 01:18:32.912 "seek_data": false, 01:18:32.912 "seek_hole": false, 01:18:32.912 "unmap": true, 01:18:32.912 
"write": true, 01:18:32.912 "write_zeroes": true, 01:18:32.912 "zcopy": false, 01:18:32.912 "zone_append": false, 01:18:32.912 "zone_management": false 01:18:32.912 }, 01:18:32.912 "uuid": "c2b4abda-52dc-4de2-8566-83ecf1c5f038", 01:18:32.912 "zoned": false 01:18:32.912 } 01:18:32.912 ] 01:18:32.912 05:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66354 01:18:32.912 05:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:18:32.912 05:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:18:32.912 Running I/O for 10 seconds... 01:18:33.851 Latency(us) 01:18:33.851 [2024-12-09T05:13:25.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:33.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:33.851 Nvme0n1 : 1.00 13936.00 54.44 0.00 0.00 0.00 0.00 0.00 01:18:33.851 [2024-12-09T05:13:25.468Z] =================================================================================================================== 01:18:33.851 [2024-12-09T05:13:25.468Z] Total : 13936.00 54.44 0.00 0.00 0.00 0.00 0.00 01:18:33.851 01:18:34.793 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:35.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:35.054 Nvme0n1 : 2.00 13839.00 54.06 0.00 0.00 0.00 0.00 0.00 01:18:35.054 [2024-12-09T05:13:26.671Z] =================================================================================================================== 01:18:35.054 [2024-12-09T05:13:26.671Z] Total : 13839.00 54.06 0.00 0.00 0.00 0.00 0.00 01:18:35.054 01:18:35.054 true 01:18:35.054 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:18:35.054 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:35.315 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:18:35.315 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:18:35.315 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66354 01:18:35.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:35.887 Nvme0n1 : 3.00 13611.67 53.17 0.00 0.00 0.00 0.00 0.00 01:18:35.887 [2024-12-09T05:13:27.504Z] =================================================================================================================== 01:18:35.887 [2024-12-09T05:13:27.504Z] Total : 13611.67 53.17 0.00 0.00 0.00 0.00 0.00 01:18:35.887 01:18:37.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:37.274 Nvme0n1 : 4.00 13409.00 52.38 0.00 0.00 0.00 0.00 0.00 01:18:37.274 [2024-12-09T05:13:28.891Z] =================================================================================================================== 01:18:37.274 [2024-12-09T05:13:28.891Z] Total : 13409.00 52.38 0.00 
0.00 0.00 0.00 0.00 01:18:37.274 01:18:38.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:38.216 Nvme0n1 : 5.00 12682.20 49.54 0.00 0.00 0.00 0.00 0.00 01:18:38.216 [2024-12-09T05:13:29.833Z] =================================================================================================================== 01:18:38.217 [2024-12-09T05:13:29.834Z] Total : 12682.20 49.54 0.00 0.00 0.00 0.00 0.00 01:18:38.217 01:18:39.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:39.159 Nvme0n1 : 6.00 12191.83 47.62 0.00 0.00 0.00 0.00 0.00 01:18:39.159 [2024-12-09T05:13:30.776Z] =================================================================================================================== 01:18:39.159 [2024-12-09T05:13:30.776Z] Total : 12191.83 47.62 0.00 0.00 0.00 0.00 0.00 01:18:39.159 01:18:40.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:40.120 Nvme0n1 : 7.00 11992.43 46.85 0.00 0.00 0.00 0.00 0.00 01:18:40.120 [2024-12-09T05:13:31.737Z] =================================================================================================================== 01:18:40.120 [2024-12-09T05:13:31.738Z] Total : 11992.43 46.85 0.00 0.00 0.00 0.00 0.00 01:18:40.121 01:18:41.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:41.067 Nvme0n1 : 8.00 12062.50 47.12 0.00 0.00 0.00 0.00 0.00 01:18:41.067 [2024-12-09T05:13:32.684Z] =================================================================================================================== 01:18:41.067 [2024-12-09T05:13:32.684Z] Total : 12062.50 47.12 0.00 0.00 0.00 0.00 0.00 01:18:41.067 01:18:42.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:42.000 Nvme0n1 : 9.00 12131.33 47.39 0.00 0.00 0.00 0.00 0.00 01:18:42.000 [2024-12-09T05:13:33.617Z] =================================================================================================================== 01:18:42.000 [2024-12-09T05:13:33.617Z] Total : 12131.33 47.39 0.00 0.00 0.00 0.00 0.00 01:18:42.000 01:18:42.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:42.932 Nvme0n1 : 10.00 11897.20 46.47 0.00 0.00 0.00 0.00 0.00 01:18:42.932 [2024-12-09T05:13:34.549Z] =================================================================================================================== 01:18:42.932 [2024-12-09T05:13:34.549Z] Total : 11897.20 46.47 0.00 0.00 0.00 0.00 0.00 01:18:42.932 01:18:42.932 01:18:42.932 Latency(us) 01:18:42.932 [2024-12-09T05:13:34.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:42.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:42.932 Nvme0n1 : 10.01 11895.49 46.47 0.00 0.00 10754.28 3528.86 240365.88 01:18:42.932 [2024-12-09T05:13:34.549Z] =================================================================================================================== 01:18:42.932 [2024-12-09T05:13:34.549Z] Total : 11895.49 46.47 0.00 0.00 10754.28 3528.86 240365.88 01:18:42.932 { 01:18:42.932 "results": [ 01:18:42.932 { 01:18:42.932 "job": "Nvme0n1", 01:18:42.932 "core_mask": "0x2", 01:18:42.932 "workload": "randwrite", 01:18:42.932 "status": "finished", 01:18:42.932 "queue_depth": 128, 01:18:42.932 "io_size": 4096, 01:18:42.932 "runtime": 10.006898, 01:18:42.932 "iops": 11895.494487902244, 01:18:42.932 "mibps": 46.46677534336814, 01:18:42.932 "io_failed": 0, 01:18:42.932 "io_timeout": 0, 01:18:42.932 
"avg_latency_us": 10754.281758082974, 01:18:42.932 "min_latency_us": 3528.8615384615387, 01:18:42.932 "max_latency_us": 240365.88307692308 01:18:42.932 } 01:18:42.932 ], 01:18:42.932 "core_count": 1 01:18:42.932 } 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66305 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66305 ']' 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66305 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66305 01:18:42.932 killing process with pid 66305 01:18:42.932 Received shutdown signal, test time was about 10.000000 seconds 01:18:42.932 01:18:42.932 Latency(us) 01:18:42.932 [2024-12-09T05:13:34.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:42.932 [2024-12-09T05:13:34.549Z] =================================================================================================================== 01:18:42.932 [2024-12-09T05:13:34.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66305' 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66305 01:18:42.932 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66305 01:18:43.189 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:43.446 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65726 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65726 01:18:43.711 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65726 Killed "${NVMF_APP[@]}" "$@" 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66516 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66516 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66516 ']' 01:18:43.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:43.711 05:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:43.973 [2024-12-09 05:13:35.359960] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:43.973 [2024-12-09 05:13:35.360021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:43.973 [2024-12-09 05:13:35.496186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:43.973 [2024-12-09 05:13:35.532801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:43.973 [2024-12-09 05:13:35.534072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:43.973 [2024-12-09 05:13:35.534380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:43.973 [2024-12-09 05:13:35.534705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:43.973 [2024-12-09 05:13:35.534858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:18:43.973 [2024-12-09 05:13:35.535681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:44.914 [2024-12-09 05:13:36.446920] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 01:18:44.914 [2024-12-09 05:13:36.447380] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:18:44.914 [2024-12-09 05:13:36.447654] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:44.914 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:18:44.915 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:18:44.915 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:18:44.915 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:18:44.915 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:18:45.179 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2b4abda-52dc-4de2-8566-83ecf1c5f038 -t 2000 01:18:45.439 [ 01:18:45.439 { 01:18:45.439 "aliases": [ 01:18:45.439 "lvs/lvol" 01:18:45.439 ], 01:18:45.439 "assigned_rate_limits": { 01:18:45.439 "r_mbytes_per_sec": 0, 01:18:45.439 "rw_ios_per_sec": 0, 01:18:45.439 "rw_mbytes_per_sec": 0, 01:18:45.439 "w_mbytes_per_sec": 0 01:18:45.439 }, 01:18:45.439 "block_size": 4096, 01:18:45.439 "claimed": false, 01:18:45.439 "driver_specific": { 01:18:45.440 "lvol": { 01:18:45.440 "base_bdev": "aio_bdev", 01:18:45.440 "clone": false, 01:18:45.440 "esnap_clone": false, 01:18:45.440 "lvol_store_uuid": "f4b7864d-3bc2-4f80-a765-ea77abaaddfe", 01:18:45.440 "num_allocated_clusters": 38, 01:18:45.440 "snapshot": false, 01:18:45.440 
"thin_provision": false 01:18:45.440 } 01:18:45.440 }, 01:18:45.440 "name": "c2b4abda-52dc-4de2-8566-83ecf1c5f038", 01:18:45.440 "num_blocks": 38912, 01:18:45.440 "product_name": "Logical Volume", 01:18:45.440 "supported_io_types": { 01:18:45.440 "abort": false, 01:18:45.440 "compare": false, 01:18:45.440 "compare_and_write": false, 01:18:45.440 "copy": false, 01:18:45.440 "flush": false, 01:18:45.440 "get_zone_info": false, 01:18:45.440 "nvme_admin": false, 01:18:45.440 "nvme_io": false, 01:18:45.440 "nvme_io_md": false, 01:18:45.440 "nvme_iov_md": false, 01:18:45.440 "read": true, 01:18:45.440 "reset": true, 01:18:45.440 "seek_data": true, 01:18:45.440 "seek_hole": true, 01:18:45.440 "unmap": true, 01:18:45.440 "write": true, 01:18:45.440 "write_zeroes": true, 01:18:45.440 "zcopy": false, 01:18:45.440 "zone_append": false, 01:18:45.440 "zone_management": false 01:18:45.440 }, 01:18:45.440 "uuid": "c2b4abda-52dc-4de2-8566-83ecf1c5f038", 01:18:45.440 "zoned": false 01:18:45.440 } 01:18:45.440 ] 01:18:45.440 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:18:45.440 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:18:45.440 05:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:45.701 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:18:45.701 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:45.701 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:45.963 [2024-12-09 05:13:37.508640] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:45.963 05:13:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:18:45.963 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:46.225 2024/12/09 05:13:37 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f4b7864d-3bc2-4f80-a765-ea77abaaddfe], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:18:46.225 request: 01:18:46.225 { 01:18:46.225 "method": "bdev_lvol_get_lvstores", 01:18:46.225 "params": { 01:18:46.225 "uuid": "f4b7864d-3bc2-4f80-a765-ea77abaaddfe" 01:18:46.225 } 01:18:46.225 } 01:18:46.225 Got JSON-RPC error response 01:18:46.225 GoRPCClient: error on JSON-RPC call 01:18:46.225 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:18:46.225 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:46.225 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:18:46.225 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:18:46.225 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:46.486 aio_bdev 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:18:46.486 05:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:18:46.747 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2b4abda-52dc-4de2-8566-83ecf1c5f038 -t 2000 01:18:47.006 [ 
01:18:47.006 { 01:18:47.006 "aliases": [ 01:18:47.006 "lvs/lvol" 01:18:47.006 ], 01:18:47.006 "assigned_rate_limits": { 01:18:47.006 "r_mbytes_per_sec": 0, 01:18:47.006 "rw_ios_per_sec": 0, 01:18:47.006 "rw_mbytes_per_sec": 0, 01:18:47.006 "w_mbytes_per_sec": 0 01:18:47.006 }, 01:18:47.006 "block_size": 4096, 01:18:47.006 "claimed": false, 01:18:47.006 "driver_specific": { 01:18:47.006 "lvol": { 01:18:47.006 "base_bdev": "aio_bdev", 01:18:47.006 "clone": false, 01:18:47.006 "esnap_clone": false, 01:18:47.006 "lvol_store_uuid": "f4b7864d-3bc2-4f80-a765-ea77abaaddfe", 01:18:47.006 "num_allocated_clusters": 38, 01:18:47.006 "snapshot": false, 01:18:47.006 "thin_provision": false 01:18:47.006 } 01:18:47.006 }, 01:18:47.006 "name": "c2b4abda-52dc-4de2-8566-83ecf1c5f038", 01:18:47.006 "num_blocks": 38912, 01:18:47.006 "product_name": "Logical Volume", 01:18:47.006 "supported_io_types": { 01:18:47.006 "abort": false, 01:18:47.006 "compare": false, 01:18:47.006 "compare_and_write": false, 01:18:47.006 "copy": false, 01:18:47.006 "flush": false, 01:18:47.006 "get_zone_info": false, 01:18:47.006 "nvme_admin": false, 01:18:47.006 "nvme_io": false, 01:18:47.006 "nvme_io_md": false, 01:18:47.006 "nvme_iov_md": false, 01:18:47.006 "read": true, 01:18:47.006 "reset": true, 01:18:47.006 "seek_data": true, 01:18:47.006 "seek_hole": true, 01:18:47.006 "unmap": true, 01:18:47.006 "write": true, 01:18:47.006 "write_zeroes": true, 01:18:47.006 "zcopy": false, 01:18:47.006 "zone_append": false, 01:18:47.006 "zone_management": false 01:18:47.006 }, 01:18:47.006 "uuid": "c2b4abda-52dc-4de2-8566-83ecf1c5f038", 01:18:47.006 "zoned": false 01:18:47.006 } 01:18:47.006 ] 01:18:47.006 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:18:47.007 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:47.007 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:18:47.007 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:18:47.007 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:47.007 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:18:47.264 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:18:47.264 05:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c2b4abda-52dc-4de2-8566-83ecf1c5f038 01:18:47.523 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe 01:18:47.781 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:48.040 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:48.298 01:18:48.298 real 0m18.689s 01:18:48.298 user 0m38.801s 01:18:48.298 sys 0m5.542s 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:48.298 ************************************ 01:18:48.298 END TEST lvs_grow_dirty 01:18:48.298 ************************************ 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:18:48.298 nvmf_trace.0 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:18:48.298 05:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:49.677 rmmod nvme_tcp 01:18:49.677 rmmod nvme_fabrics 01:18:49.677 rmmod nvme_keyring 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66516 ']' 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66516 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66516 ']' 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66516 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:18:49.677 05:13:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66516 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66516' 01:18:49.677 killing process with pid 66516 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66516 01:18:49.677 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66516 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:18:49.936 01:18:49.936 real 0m38.398s 01:18:49.936 user 1m0.861s 01:18:49.936 sys 0m8.921s 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:49.936 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:49.936 ************************************ 01:18:49.936 END TEST nvmf_lvs_grow 01:18:49.936 ************************************ 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:50.197 ************************************ 01:18:50.197 START TEST nvmf_bdev_io_wait 01:18:50.197 ************************************ 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:18:50.197 * Looking for test storage... 
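(For readers following the trace: the lvs_grow_dirty sequence that finished above boils down to a handful of RPCs. A condensed, illustrative re-listing — the UUIDs, paths and expected cluster counts are the ones printed in this run, not fixed values, and rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:)

  rpc.py bdev_aio_delete aio_bdev                                          # hot-remove: closes lvstore "lvs"
  rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe    # expected to fail: Code=-19, No such device
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b c2b4abda-52dc-4de2-8566-83ecf1c5f038 -t 2000    # lvol bdev is back after re-examine
  rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe | jq -r '.[0].free_clusters'         # 61 in this run
  rpc.py bdev_lvol_get_lvstores -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe | jq -r '.[0].total_data_clusters'   # 99 in this run
  rpc.py bdev_lvol_delete c2b4abda-52dc-4de2-8566-83ecf1c5f038
  rpc.py bdev_lvol_delete_lvstore -u f4b7864d-3bc2-4f80-a765-ea77abaaddfe
  rpc.py bdev_aio_delete aio_bdev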
01:18:50.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:50.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.197 --rc genhtml_branch_coverage=1 01:18:50.197 --rc genhtml_function_coverage=1 01:18:50.197 --rc genhtml_legend=1 01:18:50.197 --rc geninfo_all_blocks=1 01:18:50.197 --rc geninfo_unexecuted_blocks=1 01:18:50.197 01:18:50.197 ' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:50.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.197 --rc genhtml_branch_coverage=1 01:18:50.197 --rc genhtml_function_coverage=1 01:18:50.197 --rc genhtml_legend=1 01:18:50.197 --rc geninfo_all_blocks=1 01:18:50.197 --rc geninfo_unexecuted_blocks=1 01:18:50.197 01:18:50.197 ' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:50.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.197 --rc genhtml_branch_coverage=1 01:18:50.197 --rc genhtml_function_coverage=1 01:18:50.197 --rc genhtml_legend=1 01:18:50.197 --rc geninfo_all_blocks=1 01:18:50.197 --rc geninfo_unexecuted_blocks=1 01:18:50.197 01:18:50.197 ' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:50.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.197 --rc genhtml_branch_coverage=1 01:18:50.197 --rc genhtml_function_coverage=1 01:18:50.197 --rc genhtml_legend=1 01:18:50.197 --rc geninfo_all_blocks=1 01:18:50.197 --rc geninfo_unexecuted_blocks=1 01:18:50.197 01:18:50.197 ' 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:50.197 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:50.198 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
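(Context for the nvmftestinit trace that follows: with NET_TYPE=virt the harness builds a veth/bridge topology instead of using physical NICs. A minimal sketch of that layout, using the interface names and 10.0.0.x/24 addresses printed below — the exact ordering, the IPv4 forwarding rules and the error handling live in nvmf/common.sh:)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator side, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side, 10.0.0.3/24 (inside the netns)
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target side, 10.0.0.4/24 (inside the netns)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                          # all four *_br peers join the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP toward the target
  ping -c 1 10.0.0.3                                               # sanity pings in both directions close the setup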
01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:50.198 
05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:50.198 Cannot find device "nvmf_init_br" 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:50.198 Cannot find device "nvmf_init_br2" 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:50.198 Cannot find device "nvmf_tgt_br" 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:50.198 Cannot find device "nvmf_tgt_br2" 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:50.198 Cannot find device "nvmf_init_br" 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:18:50.198 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:50.457 Cannot find device "nvmf_init_br2" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:50.457 Cannot find device "nvmf_tgt_br" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:50.457 Cannot find device "nvmf_tgt_br2" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:50.457 Cannot find device "nvmf_br" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:50.457 Cannot find device "nvmf_init_if" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:50.457 Cannot find device "nvmf_init_if2" 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:50.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:18:50.457 
05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:50.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:50.457 05:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:50.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:50.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 01:18:50.457 01:18:50.457 --- 10.0.0.3 ping statistics --- 01:18:50.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:50.457 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:50.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:50.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:18:50.457 01:18:50.457 --- 10.0.0.4 ping statistics --- 01:18:50.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:50.457 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:50.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:50.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:18:50.457 01:18:50.457 --- 10.0.0.1 ping statistics --- 01:18:50.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:50.457 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:50.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:50.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:18:50.457 01:18:50.457 --- 10.0.0.2 ping statistics --- 01:18:50.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:50.457 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:50.457 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=66990 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 66990 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 66990 ']' 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:50.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:50.458 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:50.715 [2024-12-09 05:13:42.078009] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:50.715 [2024-12-09 05:13:42.078059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:50.715 [2024-12-09 05:13:42.215664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:18:50.715 [2024-12-09 05:13:42.252445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:50.715 [2024-12-09 05:13:42.252499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:50.715 [2024-12-09 05:13:42.252509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:50.715 [2024-12-09 05:13:42.252518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:50.715 [2024-12-09 05:13:42.252528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:50.715 [2024-12-09 05:13:42.253496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:50.715 [2024-12-09 05:13:42.253624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:18:50.715 [2024-12-09 05:13:42.253730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:18:50.715 [2024-12-09 05:13:42.253753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 01:18:51.673 [2024-12-09 05:13:43.004051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 Malloc0 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:51.673 [2024-12-09 05:13:43.047758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67043 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67045 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67047 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:51.673 05:13:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:18:51.673 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:51.673 { 01:18:51.673 "params": { 01:18:51.673 "name": "Nvme$subsystem", 01:18:51.673 "trtype": "$TEST_TRANSPORT", 01:18:51.673 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:51.673 "adrfam": "ipv4", 01:18:51.673 "trsvcid": "$NVMF_PORT", 01:18:51.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:51.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:51.673 "hdgst": ${hdgst:-false}, 01:18:51.674 "ddgst": ${ddgst:-false} 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 } 01:18:51.674 EOF 01:18:51.674 )") 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67049 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:51.674 { 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme$subsystem", 01:18:51.674 "trtype": "$TEST_TRANSPORT", 01:18:51.674 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "$NVMF_PORT", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:51.674 "hdgst": ${hdgst:-false}, 01:18:51.674 "ddgst": ${ddgst:-false} 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 } 01:18:51.674 EOF 01:18:51.674 )") 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:51.674 { 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme$subsystem", 01:18:51.674 "trtype": "$TEST_TRANSPORT", 01:18:51.674 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 
"trsvcid": "$NVMF_PORT", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:51.674 "hdgst": ${hdgst:-false}, 01:18:51.674 "ddgst": ${ddgst:-false} 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 } 01:18:51.674 EOF 01:18:51.674 )") 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:51.674 { 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme$subsystem", 01:18:51.674 "trtype": "$TEST_TRANSPORT", 01:18:51.674 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "$NVMF_PORT", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:51.674 "hdgst": ${hdgst:-false}, 01:18:51.674 "ddgst": ${ddgst:-false} 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 } 01:18:51.674 EOF 01:18:51.674 )") 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme1", 01:18:51.674 "trtype": "tcp", 01:18:51.674 "traddr": "10.0.0.3", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "4420", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:51.674 "hdgst": false, 01:18:51.674 "ddgst": false 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 }' 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme1", 01:18:51.674 "trtype": "tcp", 01:18:51.674 "traddr": "10.0.0.3", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "4420", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:51.674 "hdgst": false, 01:18:51.674 "ddgst": false 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 }' 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme1", 01:18:51.674 "trtype": "tcp", 01:18:51.674 "traddr": "10.0.0.3", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "4420", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:51.674 "hdgst": false, 01:18:51.674 "ddgst": false 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 }' 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:51.674 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:51.674 "params": { 01:18:51.674 "name": "Nvme1", 01:18:51.674 "trtype": "tcp", 01:18:51.674 "traddr": "10.0.0.3", 01:18:51.674 "adrfam": "ipv4", 01:18:51.674 "trsvcid": "4420", 01:18:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:51.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:51.674 "hdgst": false, 01:18:51.674 "ddgst": false 01:18:51.674 }, 01:18:51.674 "method": "bdev_nvme_attach_controller" 01:18:51.674 }' 01:18:51.674 [2024-12-09 05:13:43.089862] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:51.674 [2024-12-09 05:13:43.089908] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:18:51.674 [2024-12-09 05:13:43.100667] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:18:51.674 [2024-12-09 05:13:43.100812] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:18:51.674 [2024-12-09 05:13:43.103883] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:51.674 [2024-12-09 05:13:43.103932] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:18:51.675 05:13:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67043 01:18:51.675 [2024-12-09 05:13:43.110178] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:18:51.675 [2024-12-09 05:13:43.110231] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:18:51.675 [2024-12-09 05:13:43.266750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.933 [2024-12-09 05:13:43.295082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:18:51.933 [2024-12-09 05:13:43.323192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.933 [2024-12-09 05:13:43.355485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:18:51.933 [2024-12-09 05:13:43.360159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.933 [2024-12-09 05:13:43.388777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:18:51.933 [2024-12-09 05:13:43.407581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.933 Running I/O for 1 seconds... 01:18:51.933 [2024-12-09 05:13:43.435425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:18:51.933 Running I/O for 1 seconds... 01:18:51.933 Running I/O for 1 seconds... 01:18:52.190 Running I/O for 1 seconds... 
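Four bdevperf processes start here, one per workload, kept apart by their EAL core masks (0x10, 0x20, 0x40, 0x80) and DPDK --file-prefix values (spdk1 through spdk4) so their hugepage state does not collide; the wait calls on pids 67043/67045/67047/67049 below reap them in turn. A simplified, illustrative launch-and-wait pattern (the wrapper name and flag are placeholders, not the actual bdev_io_wait.sh command lines):

# Illustrative only: run several SPDK apps concurrently and reap them in order.
pids=()
for mask in 0x10 0x20 0x40 0x80; do
    launch_bdevperf_for_mask "$mask" &   # hypothetical helper; the real script builds the full bdevperf cmdline
    pids+=($!)
done
for pid in "${pids[@]}"; do
    wait "$pid"
done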
01:18:53.124 192680.00 IOPS, 752.66 MiB/s 01:18:53.124 Latency(us) 01:18:53.124 [2024-12-09T05:13:44.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:53.124 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:18:53.124 Nvme1n1 : 1.00 192328.54 751.28 0.00 0.00 662.32 286.72 1814.84 01:18:53.124 [2024-12-09T05:13:44.741Z] =================================================================================================================== 01:18:53.124 [2024-12-09T05:13:44.741Z] Total : 192328.54 751.28 0.00 0.00 662.32 286.72 1814.84 01:18:53.124 8682.00 IOPS, 33.91 MiB/s 01:18:53.124 Latency(us) 01:18:53.124 [2024-12-09T05:13:44.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:53.124 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:18:53.124 Nvme1n1 : 1.01 8700.71 33.99 0.00 0.00 14646.95 8318.03 26617.70 01:18:53.124 [2024-12-09T05:13:44.741Z] =================================================================================================================== 01:18:53.124 [2024-12-09T05:13:44.741Z] Total : 8700.71 33.99 0.00 0.00 14646.95 8318.03 26617.70 01:18:53.124 12956.00 IOPS, 50.61 MiB/s 01:18:53.124 Latency(us) 01:18:53.124 [2024-12-09T05:13:44.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:53.124 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:18:53.125 Nvme1n1 : 1.01 13016.49 50.85 0.00 0.00 9803.11 4007.78 20769.87 01:18:53.125 [2024-12-09T05:13:44.742Z] =================================================================================================================== 01:18:53.125 [2024-12-09T05:13:44.742Z] Total : 13016.49 50.85 0.00 0.00 9803.11 4007.78 20769.87 01:18:53.125 9132.00 IOPS, 35.67 MiB/s 01:18:53.125 Latency(us) 01:18:53.125 [2024-12-09T05:13:44.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:53.125 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:18:53.125 Nvme1n1 : 1.00 9247.57 36.12 0.00 0.00 13823.59 1487.16 37506.76 01:18:53.125 [2024-12-09T05:13:44.742Z] =================================================================================================================== 01:18:53.125 [2024-12-09T05:13:44.742Z] Total : 9247.57 36.12 0.00 0.00 13823.59 1487.16 37506.76 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67045 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67047 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67049 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:53.125 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:53.383 rmmod nvme_tcp 01:18:53.383 rmmod nvme_fabrics 01:18:53.383 rmmod nvme_keyring 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 66990 ']' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 66990 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 66990 ']' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 66990 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66990 01:18:53.383 killing process with pid 66990 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66990' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 66990 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 66990 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:53.383 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:18:53.642 01:18:53.642 real 0m3.563s 01:18:53.642 user 0m15.510s 01:18:53.642 sys 0m1.414s 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:53.642 ************************************ 01:18:53.642 END TEST nvmf_bdev_io_wait 01:18:53.642 ************************************ 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:53.642 ************************************ 01:18:53.642 START TEST nvmf_queue_depth 01:18:53.642 ************************************ 01:18:53.642 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:18:53.900 * Looking for test storage... 
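Each test script is driven through a run_test wrapper that prints the START/END banners, times the script (the real/user/sys lines above), and propagates its exit status, as in the run_test nvmf_queue_depth invocation here. A simplified illustration of that wrapper shape, based only on what is visible in this log and not on the actual autotest_common.sh implementation:

# Simplified illustration of a run_test-style wrapper (not the real implementation).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
# e.g. run_test_sketch nvmf_queue_depth ./test/nvmf/target/queue_depth.sh --transport=tcp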
01:18:53.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:53.900 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:53.901 --rc genhtml_branch_coverage=1 01:18:53.901 --rc genhtml_function_coverage=1 01:18:53.901 --rc genhtml_legend=1 01:18:53.901 --rc geninfo_all_blocks=1 01:18:53.901 --rc geninfo_unexecuted_blocks=1 01:18:53.901 01:18:53.901 ' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:53.901 --rc genhtml_branch_coverage=1 01:18:53.901 --rc genhtml_function_coverage=1 01:18:53.901 --rc genhtml_legend=1 01:18:53.901 --rc geninfo_all_blocks=1 01:18:53.901 --rc geninfo_unexecuted_blocks=1 01:18:53.901 01:18:53.901 ' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:53.901 --rc genhtml_branch_coverage=1 01:18:53.901 --rc genhtml_function_coverage=1 01:18:53.901 --rc genhtml_legend=1 01:18:53.901 --rc geninfo_all_blocks=1 01:18:53.901 --rc geninfo_unexecuted_blocks=1 01:18:53.901 01:18:53.901 ' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:53.901 --rc genhtml_branch_coverage=1 01:18:53.901 --rc genhtml_function_coverage=1 01:18:53.901 --rc genhtml_legend=1 01:18:53.901 --rc geninfo_all_blocks=1 01:18:53.901 --rc geninfo_unexecuted_blocks=1 01:18:53.901 01:18:53.901 ' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:53.901 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:53.901 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:18:53.902 
05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:53.902 05:13:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:53.902 Cannot find device "nvmf_init_br" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:53.902 Cannot find device "nvmf_init_br2" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:53.902 Cannot find device "nvmf_tgt_br" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:53.902 Cannot find device "nvmf_tgt_br2" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:53.902 Cannot find device "nvmf_init_br" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:53.902 Cannot find device "nvmf_init_br2" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:53.902 Cannot find device "nvmf_tgt_br" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:53.902 Cannot find device "nvmf_tgt_br2" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:53.902 Cannot find device "nvmf_br" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:53.902 Cannot find device "nvmf_init_if" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:53.902 Cannot find device "nvmf_init_if2" 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:53.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:53.902 05:13:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:53.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:53.902 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:54.163 
05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:54.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:54.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 01:18:54.163 01:18:54.163 --- 10.0.0.3 ping statistics --- 01:18:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:54.163 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:54.163 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:54.163 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 01:18:54.163 01:18:54.163 --- 10.0.0.4 ping statistics --- 01:18:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:54.163 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:54.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:54.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:18:54.163 01:18:54.163 --- 10.0.0.1 ping statistics --- 01:18:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:54.163 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:54.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:54.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 01:18:54.163 01:18:54.163 --- 10.0.0.2 ping statistics --- 01:18:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:54.163 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67312 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67312 01:18:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67312 ']' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:54.163 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:54.164 05:13:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:54.164 [2024-12-09 05:13:45.721100] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
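With NET_TYPE=virt, the fixture builds its own virtual topology before starting the target: a network namespace for the target process, veth pairs for the initiator and target sides, a bridge joining them, iptables ACCEPT rules for port 4420, and ping checks across 10.0.0.1 to 10.0.0.4, all of which is traced above. A condensed sketch of that setup for a single initiator/target pair, reusing the interface names and addresses from the trace (the real nvmf_veth_init also configures the second pair and has the matching teardown seen in nvmf_veth_fini):

# Condensed sketch of the virtual topology built above (one initiator/target pair).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # 10.0.0.3 lives inside nvmf_tgt_ns_spdk; the initiator stays in the default namespace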
01:18:54.164 [2024-12-09 05:13:45.721164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:54.422 [2024-12-09 05:13:45.865835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:54.422 [2024-12-09 05:13:45.907949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:54.422 [2024-12-09 05:13:45.907994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:54.422 [2024-12-09 05:13:45.908001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:54.422 [2024-12-09 05:13:45.908006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:54.422 [2024-12-09 05:13:45.908011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:54.422 [2024-12-09 05:13:45.908278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 [2024-12-09 05:13:46.668965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 Malloc0 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.361 [2024-12-09 05:13:46.715623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:55.361 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67361 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67361 /var/tmp/bdevperf.sock 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67361 ']' 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:55.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:55.362 05:13:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:55.362 [2024-12-09 05:13:46.761642] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
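The queue_depth test body reduces to a short RPC sequence against the freshly started target plus a single bdevperf client pinned to queue depth 1024, all of which is traced here and just below: create the TCP transport, back a subsystem with a Malloc bdev, expose it on 10.0.0.3:4420, attach to it from bdevperf over its private RPC socket, and drive the workload with bdevperf.py. A condensed outline using the commands from the trace (the real script additionally waits for each process to start listening before issuing RPCs):

# Target side: transport, bdev, subsystem, namespace, listener.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Client side: bdevperf started idle behind its own RPC socket (-z -r), then attach and run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests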
01:18:55.362 [2024-12-09 05:13:46.761721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67361 ] 01:18:55.362 [2024-12-09 05:13:46.902728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:55.362 [2024-12-09 05:13:46.958214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:56.300 NVMe0n1 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:56.300 05:13:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:18:56.300 Running I/O for 10 seconds... 01:18:58.260 8192.00 IOPS, 32.00 MiB/s [2024-12-09T05:13:51.263Z] 8694.00 IOPS, 33.96 MiB/s [2024-12-09T05:13:52.250Z] 9737.00 IOPS, 38.04 MiB/s [2024-12-09T05:13:53.240Z] 10478.25 IOPS, 40.93 MiB/s [2024-12-09T05:13:54.179Z] 10947.00 IOPS, 42.76 MiB/s [2024-12-09T05:13:55.119Z] 11262.50 IOPS, 43.99 MiB/s [2024-12-09T05:13:56.066Z] 11236.57 IOPS, 43.89 MiB/s [2024-12-09T05:13:57.009Z] 11399.75 IOPS, 44.53 MiB/s [2024-12-09T05:13:57.952Z] 11608.78 IOPS, 45.35 MiB/s [2024-12-09T05:13:57.952Z] 11768.20 IOPS, 45.97 MiB/s 01:19:06.335 Latency(us) 01:19:06.335 [2024-12-09T05:13:57.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:06.335 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:19:06.335 Verification LBA range: start 0x0 length 0x4000 01:19:06.335 NVMe0n1 : 10.06 11801.33 46.10 0.00 0.00 86472.72 22887.19 76223.41 01:19:06.335 [2024-12-09T05:13:57.952Z] =================================================================================================================== 01:19:06.335 [2024-12-09T05:13:57.952Z] Total : 11801.33 46.10 0.00 0.00 86472.72 22887.19 76223.41 01:19:06.335 { 01:19:06.335 "results": [ 01:19:06.335 { 01:19:06.335 "job": "NVMe0n1", 01:19:06.335 "core_mask": "0x1", 01:19:06.335 "workload": "verify", 01:19:06.335 "status": "finished", 01:19:06.335 "verify_range": { 01:19:06.335 "start": 0, 01:19:06.335 "length": 16384 01:19:06.335 }, 01:19:06.335 "queue_depth": 1024, 01:19:06.335 "io_size": 4096, 01:19:06.335 "runtime": 10.057258, 01:19:06.335 "iops": 11801.327956387317, 01:19:06.335 "mibps": 46.098937329637955, 01:19:06.335 "io_failed": 0, 01:19:06.335 "io_timeout": 0, 01:19:06.335 "avg_latency_us": 86472.72209006472, 01:19:06.335 "min_latency_us": 22887.187692307692, 01:19:06.335 "max_latency_us": 76223.40923076923 01:19:06.335 } 01:19:06.335 ], 01:19:06.335 "core_count": 1 01:19:06.335 } 01:19:06.335 05:13:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67361 01:19:06.335 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67361 ']' 01:19:06.335 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67361 01:19:06.335 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:19:06.335 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:06.335 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67361 01:19:06.596 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:06.596 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:06.596 killing process with pid 67361 01:19:06.597 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67361' 01:19:06.597 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67361 01:19:06.597 Received shutdown signal, test time was about 10.000000 seconds 01:19:06.597 01:19:06.597 Latency(us) 01:19:06.597 [2024-12-09T05:13:58.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:06.597 [2024-12-09T05:13:58.214Z] =================================================================================================================== 01:19:06.597 [2024-12-09T05:13:58.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:19:06.597 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67361 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:06.597 rmmod nvme_tcp 01:19:06.597 rmmod nvme_fabrics 01:19:06.597 rmmod nvme_keyring 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67312 ']' 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67312 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67312 ']' 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 67312 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:06.597 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67312 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:19:06.858 killing process with pid 67312 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67312' 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67312 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67312 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:06.858 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:19:07.120 01:19:07.120 real 0m13.363s 01:19:07.120 user 0m23.371s 01:19:07.120 sys 0m1.604s 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:07.120 ************************************ 01:19:07.120 END TEST nvmf_queue_depth 01:19:07.120 ************************************ 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:07.120 ************************************ 01:19:07.120 START TEST nvmf_target_multipath 01:19:07.120 ************************************ 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:19:07.120 * Looking for test storage... 
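The block of `ip link ... nomaster/down/delete` and namespace commands above is the harness's nvmftestfini/nvmf_veth_fini teardown for the queue-depth test: the SPDK-tagged firewall rules are stripped, the veth/bridge topology is unwound, and the target network namespace is removed. A condensed sketch of that chain, assuming the helper functions do no more than what the trace shows (the final `ip netns delete` is inferred from _remove_spdk_ns and is not shown verbatim in the log):

  iptables-save | grep -v SPDK_NVMF | iptables-restore       # keep everything except the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster && ip link set "$dev" down    # detach the bridge ports, then bring them down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                            # assumed equivalent of _remove_spdk_ns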
01:19:07.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 01:19:07.120 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:07.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:07.382 --rc genhtml_branch_coverage=1 01:19:07.382 --rc genhtml_function_coverage=1 01:19:07.382 --rc genhtml_legend=1 01:19:07.382 --rc geninfo_all_blocks=1 01:19:07.382 --rc geninfo_unexecuted_blocks=1 01:19:07.382 01:19:07.382 ' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:07.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:07.382 --rc genhtml_branch_coverage=1 01:19:07.382 --rc genhtml_function_coverage=1 01:19:07.382 --rc genhtml_legend=1 01:19:07.382 --rc geninfo_all_blocks=1 01:19:07.382 --rc geninfo_unexecuted_blocks=1 01:19:07.382 01:19:07.382 ' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:07.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:07.382 --rc genhtml_branch_coverage=1 01:19:07.382 --rc genhtml_function_coverage=1 01:19:07.382 --rc genhtml_legend=1 01:19:07.382 --rc geninfo_all_blocks=1 01:19:07.382 --rc geninfo_unexecuted_blocks=1 01:19:07.382 01:19:07.382 ' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:07.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:07.382 --rc genhtml_branch_coverage=1 01:19:07.382 --rc genhtml_function_coverage=1 01:19:07.382 --rc genhtml_legend=1 01:19:07.382 --rc geninfo_all_blocks=1 01:19:07.382 --rc geninfo_unexecuted_blocks=1 01:19:07.382 01:19:07.382 ' 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:07.382 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.383 
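The sourced common.sh above pins the harness defaults (ports 4420/4421/4422, the TCP loopback test address) and derives the initiator identity once per run: NVME_HOSTNQN comes from `nvme gen-hostnqn` and NVME_HOSTID reuses its UUID portion, and that pair is replayed on every later `nvme connect`. A minimal sketch of the derivation; the parameter-expansion form is an assumption, the harness may extract the UUID differently:

  hostnqn=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784
  hostid=${hostnqn##*:uuid:}           # assumed: the host ID is the UUID suffix of the generated NQN
  NVME_HOST=(--hostnqn="$hostnqn" --hostid="$hostid")
  # used later as: nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G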
05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:07.383 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:07.383 05:13:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:07.383 Cannot find device "nvmf_init_br" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:07.383 Cannot find device "nvmf_init_br2" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:07.383 Cannot find device "nvmf_tgt_br" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:07.383 Cannot find device "nvmf_tgt_br2" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:07.383 Cannot find device "nvmf_init_br" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:07.383 Cannot find device "nvmf_init_br2" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:07.383 Cannot find device "nvmf_tgt_br" 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:19:07.383 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:07.383 Cannot find device "nvmf_tgt_br2" 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:07.384 Cannot find device "nvmf_br" 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:07.384 Cannot find device "nvmf_init_if" 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:07.384 Cannot find device "nvmf_init_if2" 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:07.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:07.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:07.384 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:07.644 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
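nvmf_veth_init has now laid out the multipath test network: two initiator-side veth pairs stay in the root namespace with 10.0.0.1 and 10.0.0.2, the two target-side interfaces are moved into nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and the bridge plus firewall rules follow just below. A condensed sketch of the commands traced above; the loop is added only for readability, everything else is taken from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, moved into the namespace
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up                                       # the namespace-side links come up next in the trace
  done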
01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:07.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:07.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:19:07.644 01:19:07.644 --- 10.0.0.3 ping statistics --- 01:19:07.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.644 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:07.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:07.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 01:19:07.644 01:19:07.644 --- 10.0.0.4 ping statistics --- 01:19:07.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.644 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:07.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:19:07.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 01:19:07.644 01:19:07.644 --- 10.0.0.1 ping statistics --- 01:19:07.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.644 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:07.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:19:07.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 01:19:07.644 01:19:07.644 --- 10.0.0.2 ping statistics --- 01:19:07.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:07.644 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67748 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67748 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67748 ']' 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:07.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
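Two details in the setup above carry the rest of the test: the port-4420 ACCEPT rules are inserted with an `-m comment --comment SPDK_NVMF:...` tag, which is exactly what the later cleanup's iptables-save | grep -v SPDK_NVMF | iptables-restore keys on, and connectivity is proven with one ping per address in each direction before the target application is started inside the namespace. A sketch of that final check and launch, condensed into loops and backgrounded here only for illustration (the harness itself waits for /var/tmp/spdk.sock via waitforlisten):

  for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip"; done                                   # root namespace -> target addresses
  for ip in 10.0.0.1 10.0.0.2; do ip netns exec nvmf_tgt_ns_spdk ping -c 1 "$ip"; done    # namespace -> initiator addresses
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &               # NVMF_APP wrapped in the namespace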
01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:07.644 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:19:07.644 [2024-12-09 05:13:59.141029] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:07.644 [2024-12-09 05:13:59.141087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:07.904 [2024-12-09 05:13:59.277418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:07.904 [2024-12-09 05:13:59.328083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:07.904 [2024-12-09 05:13:59.328147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:07.904 [2024-12-09 05:13:59.328154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:07.904 [2024-12-09 05:13:59.328160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:07.904 [2024-12-09 05:13:59.328165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:07.904 [2024-12-09 05:13:59.329364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:07.904 [2024-12-09 05:13:59.329666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:07.904 [2024-12-09 05:13:59.330718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:07.904 [2024-12-09 05:13:59.330842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:19:08.476 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:08.737 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:08.737 [2024-12-09 05:14:00.289868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:08.737 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:19:08.997 Malloc0 01:19:08.997 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
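With the target running on all four cores, the test drives the whole configuration through rpc.py: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 created with -a -s SPDKISFASTANDAWESOME -r; the namespace and the two listeners on 10.0.0.3:4420 and 10.0.0.4:4420 are added immediately below, and it is those two listeners whose ANA state the test later flips between optimized, non_optimized and inaccessible with nvmf_subsystem_listener_set_ana_state while fio keeps I/O running. A condensed recap of the RPC sequence, identical to the logged commands apart from the $rpc shorthand:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420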
01:19:09.257 05:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:09.517 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:09.801 [2024-12-09 05:14:01.206056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:19:09.801 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:19:10.062 [2024-12-09 05:14:01.422425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:19:10.062 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:19:10.062 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:19:10.322 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:19:10.322 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:19:10.322 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:19:10.322 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:19:10.322 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67884 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:19:12.864 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:19:12.865 [global] 01:19:12.865 thread=1 01:19:12.865 invalidate=1 01:19:12.865 rw=randrw 01:19:12.865 time_based=1 01:19:12.865 runtime=6 01:19:12.865 ioengine=libaio 01:19:12.865 direct=1 01:19:12.865 bs=4096 01:19:12.865 iodepth=128 01:19:12.865 norandommap=0 01:19:12.865 numjobs=1 01:19:12.865 01:19:12.865 verify_dump=1 01:19:12.865 verify_backlog=512 01:19:12.865 verify_state_save=0 01:19:12.865 do_verify=1 01:19:12.865 verify=crc32c-intel 01:19:12.865 [job0] 01:19:12.865 filename=/dev/nvme0n1 01:19:12.865 Could not set queue depth (nvme0n1) 01:19:12.865 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:12.865 fio-3.35 01:19:12.865 Starting 1 thread 01:19:13.438 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:19:13.703 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:13.965 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:19:14.908 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:19:14.908 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:19:14.908 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:14.908 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:15.169 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:19:16.554 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:19:16.554 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:16.554 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:16.554 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67884 01:19:19.145 01:19:19.145 job0: (groupid=0, jobs=1): err= 0: pid=67911: Mon Dec 9 05:14:10 2024 01:19:19.145 read: IOPS=15.8k, BW=61.9MiB/s (64.9MB/s)(372MiB/6005msec) 01:19:19.145 slat (usec): min=2, max=3384, avg=37.45, stdev=176.48 01:19:19.145 clat (usec): min=1399, max=10126, avg=5528.68, stdev=790.53 01:19:19.145 lat (usec): min=1405, max=10131, avg=5566.13, stdev=799.30 01:19:19.146 clat percentiles (usec): 01:19:19.146 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5145], 01:19:19.146 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5538], 01:19:19.146 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6390], 95.00th=[ 6849], 01:19:19.146 | 99.00th=[ 8094], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 8979], 01:19:19.146 | 99.99th=[10028] 01:19:19.146 bw ( KiB/s): min=18776, max=40544, per=51.78%, avg=32829.82, stdev=7304.87, samples=11 01:19:19.146 iops : min= 4694, max=10136, avg=8207.45, stdev=1826.22, samples=11 01:19:19.146 write: IOPS=9349, BW=36.5MiB/s (38.3MB/s)(193MiB/5298msec); 0 zone resets 01:19:19.146 slat (usec): min=6, max=1324, avg=44.27, stdev=119.10 01:19:19.146 clat (usec): min=839, max=10044, avg=4757.08, stdev=640.90 01:19:19.146 lat (usec): min=858, max=10059, avg=4801.35, stdev=643.33 01:19:19.146 clat percentiles (usec): 01:19:19.146 | 1.00th=[ 2606], 5.00th=[ 3720], 10.00th=[ 4146], 20.00th=[ 4424], 01:19:19.146 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 01:19:19.146 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5538], 01:19:19.146 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 8717], 01:19:19.146 | 99.99th=[ 8979] 01:19:19.146 bw ( KiB/s): min=19776, max=39680, per=87.82%, avg=32841.45, stdev=6841.38, samples=11 01:19:19.146 iops : min= 4944, max= 9920, avg=8210.36, stdev=1710.35, samples=11 01:19:19.146 lat (usec) : 1000=0.01% 01:19:19.146 lat (msec) : 2=0.08%, 4=4.74%, 10=95.17%, 20=0.01% 01:19:19.146 cpu : usr=3.66%, sys=17.75%, ctx=9108, majf=0, minf=127 01:19:19.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:19:19.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:19.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:19.146 issued rwts: total=95178,49533,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:19.146 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:19.146 01:19:19.146 Run status group 0 (all jobs): 01:19:19.146 READ: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=372MiB (390MB), run=6005-6005msec 01:19:19.146 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=193MiB (203MB), run=5298-5298msec 01:19:19.146 01:19:19.146 Disk stats (read/write): 01:19:19.146 nvme0n1: ios=93959/48550, merge=0/0, ticks=492612/218922, in_queue=711534, util=98.56% 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 01:19:19.146 05:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68036 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:19:20.082 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:19:20.082 [global] 01:19:20.082 thread=1 01:19:20.082 invalidate=1 01:19:20.082 rw=randrw 01:19:20.082 time_based=1 01:19:20.082 runtime=6 01:19:20.082 ioengine=libaio 01:19:20.082 direct=1 01:19:20.082 bs=4096 01:19:20.082 iodepth=128 01:19:20.082 norandommap=0 01:19:20.082 numjobs=1 01:19:20.082 01:19:20.082 verify_dump=1 01:19:20.082 verify_backlog=512 01:19:20.082 verify_state_save=0 01:19:20.083 do_verify=1 01:19:20.083 verify=crc32c-intel 01:19:20.083 [job0] 01:19:20.083 filename=/dev/nvme0n1 01:19:20.083 Could not set queue depth (nvme0n1) 01:19:20.344 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:20.344 fio-3.35 01:19:20.344 Starting 1 thread 01:19:21.282 05:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:19:21.282 05:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:21.542 05:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:19:22.482 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:19:22.482 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:19:22.482 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:22.482 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:19:22.741 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:22.999 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:19:23.939 05:14:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:19:23.939 05:14:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:19:23.939 05:14:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:19:23.939 05:14:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68036 01:19:26.485 01:19:26.485 job0: (groupid=0, jobs=1): err= 0: pid=68057: Mon Dec 9 05:14:17 2024 01:19:26.485 read: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(382MiB/6004msec) 01:19:26.485 slat (usec): min=2, max=5176, avg=32.74, stdev=173.76 01:19:26.485 clat (usec): min=112, max=23287, avg=5440.02, stdev=1876.95 01:19:26.485 lat (usec): min=124, max=23292, avg=5472.76, stdev=1889.33 01:19:26.485 clat percentiles (usec): 01:19:26.485 | 1.00th=[ 371], 5.00th=[ 848], 10.00th=[ 2900], 20.00th=[ 4817], 01:19:26.485 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5800], 01:19:26.485 | 70.00th=[ 6325], 80.00th=[ 6783], 90.00th=[ 7373], 95.00th=[ 7963], 01:19:26.485 | 99.00th=[ 9110], 99.50th=[10290], 99.90th=[15270], 99.95th=[17433], 01:19:26.485 | 99.99th=[22152] 01:19:26.485 bw ( KiB/s): min= 1952, max=57000, per=51.77%, avg=33687.27, stdev=16281.40, samples=11 01:19:26.485 iops : min= 488, max=14250, avg=8421.82, stdev=4070.35, samples=11 01:19:26.485 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(198MiB/4900msec); 0 zone resets 01:19:26.485 slat (usec): min=7, max=3699, avg=37.13, stdev=110.50 01:19:26.485 clat (usec): min=85, max=22960, avg=4547.88, stdev=1929.92 01:19:26.485 lat (usec): min=102, max=22977, avg=4585.01, stdev=1938.93 01:19:26.485 clat percentiles (usec): 01:19:26.485 | 1.00th=[ 219], 5.00th=[ 453], 10.00th=[ 1004], 20.00th=[ 3425], 01:19:26.485 | 30.00th=[ 4359], 40.00th=[ 4621], 50.00th=[ 4817], 60.00th=[ 5014], 01:19:26.485 | 70.00th=[ 5276], 80.00th=[ 5932], 90.00th=[ 6652], 95.00th=[ 7046], 01:19:26.485 | 99.00th=[ 7832], 99.50th=[ 8979], 99.90th=[15533], 99.95th=[17433], 01:19:26.485 | 99.99th=[22152] 01:19:26.485 bw ( KiB/s): min= 2016, max=57344, per=81.25%, avg=33690.18, stdev=16119.74, samples=11 01:19:26.485 iops : min= 504, max=14336, avg=8422.55, stdev=4029.93, samples=11 01:19:26.485 lat (usec) : 100=0.01%, 250=0.68%, 500=2.66%, 750=2.57%, 1000=1.20% 01:19:26.485 lat (msec) : 2=2.56%, 4=8.02%, 10=81.82%, 20=0.46%, 50=0.03% 01:19:26.485 cpu : usr=3.61%, sys=17.92%, ctx=12901, majf=0, minf=199 01:19:26.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:19:26.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:26.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:26.485 issued rwts: total=97678,50794,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:26.485 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:26.485 01:19:26.485 Run status group 0 (all jobs): 01:19:26.485 READ: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=382MiB (400MB), run=6004-6004msec 01:19:26.485 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=198MiB (208MB), run=4900-4900msec 01:19:26.485 01:19:26.485 Disk stats (read/write): 01:19:26.485 nvme0n1: ios=96453/49781, merge=0/0, ticks=500105/215046, in_queue=715151, util=98.55% 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:19:26.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:19:26.485 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:26.747 rmmod nvme_tcp 01:19:26.747 rmmod nvme_fabrics 01:19:26.747 rmmod nvme_keyring 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 67748 ']' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67748 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67748 ']' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67748 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67748 01:19:26.747 killing process with pid 67748 01:19:26.747 
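(Editorial aside.) The multipath run traced above flips ANA states with nvmf_subsystem_listener_set_ana_state while fio keeps I/O in flight, and check_ana_state simply polls the kernel's view in sysfs until it matches the expected value. A minimal sketch of that polling pattern, assuming nothing beyond what the trace shows (the helper name here is invented; the 20-attempt budget and the sysfs path mirror the traced values):

# Sketch only (not the script's actual body): poll the controller path's
# ana_state file in sysfs until it reports the expected state, giving up
# after roughly 20 one-second attempts, as in the traced timeout handling.
check_ana_state_sketch() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || "$(<"$ana_state_f")" != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
    return 0
}

# e.g. check_ana_state_sketch nvme0c0n1 optimized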
05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67748' 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67748 01:19:26.747 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67748 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:27.008 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:27.270 05:14:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:19:27.270 ************************************ 01:19:27.270 END TEST nvmf_target_multipath 01:19:27.270 ************************************ 01:19:27.270 01:19:27.270 real 0m20.072s 01:19:27.270 user 1m18.206s 01:19:27.270 sys 0m4.837s 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:27.270 ************************************ 01:19:27.270 START TEST nvmf_zcopy 01:19:27.270 ************************************ 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:19:27.270 * Looking for test storage... 
01:19:27.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:27.270 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:27.545 --rc genhtml_branch_coverage=1 01:19:27.545 --rc genhtml_function_coverage=1 01:19:27.545 --rc genhtml_legend=1 01:19:27.545 --rc geninfo_all_blocks=1 01:19:27.545 --rc geninfo_unexecuted_blocks=1 01:19:27.545 01:19:27.545 ' 01:19:27.545 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:27.546 --rc genhtml_branch_coverage=1 01:19:27.546 --rc genhtml_function_coverage=1 01:19:27.546 --rc genhtml_legend=1 01:19:27.546 --rc geninfo_all_blocks=1 01:19:27.546 --rc geninfo_unexecuted_blocks=1 01:19:27.546 01:19:27.546 ' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:27.546 --rc genhtml_branch_coverage=1 01:19:27.546 --rc genhtml_function_coverage=1 01:19:27.546 --rc genhtml_legend=1 01:19:27.546 --rc geninfo_all_blocks=1 01:19:27.546 --rc geninfo_unexecuted_blocks=1 01:19:27.546 01:19:27.546 ' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:27.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:27.546 --rc genhtml_branch_coverage=1 01:19:27.546 --rc genhtml_function_coverage=1 01:19:27.546 --rc genhtml_legend=1 01:19:27.546 --rc geninfo_all_blocks=1 01:19:27.546 --rc geninfo_unexecuted_blocks=1 01:19:27.546 01:19:27.546 ' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
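(Editorial aside.) The block above is scripts/common.sh deciding which lcov flags apply: lt hands both version strings to cmp_versions, which splits them on '.', '-' and ':' and compares them field by field. A rough, self-contained sketch of that comparison, simplified to numeric fields only (the real helper also normalizes components via its decimal function):

# Sketch: compare two dotted version strings field by field; returns 0 if $1 < $2.
version_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2: pick the matching coverage options"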
01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:27.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
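(Editorial aside.) Note the benign "integer expression expected" complaint above: line 33 of test/nvmf/common.sh runs a numeric test against a variable that is empty in this configuration, so [ rejects the empty operand and the branch simply falls through. A defensive sketch of the same kind of check, using a hypothetical variable name, that avoids the noise by defaulting empty to 0:

# Sketch: numeric test that tolerates an unset/empty flag. SOME_FLAG is a
# hypothetical stand-in for whichever variable is empty at common.sh line 33.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi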
01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:27.546 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:27.547 Cannot find device "nvmf_init_br" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:19:27.547 05:14:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:27.547 Cannot find device "nvmf_init_br2" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:27.547 Cannot find device "nvmf_tgt_br" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:27.547 Cannot find device "nvmf_tgt_br2" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:27.547 Cannot find device "nvmf_init_br" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:27.547 Cannot find device "nvmf_init_br2" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:27.547 Cannot find device "nvmf_tgt_br" 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:19:27.547 05:14:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:27.547 Cannot find device "nvmf_tgt_br2" 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:27.547 Cannot find device "nvmf_br" 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:27.547 Cannot find device "nvmf_init_if" 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:27.547 Cannot find device "nvmf_init_if2" 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:27.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:27.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:27.547 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:27.807 05:14:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:27.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:27.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 01:19:27.807 01:19:27.807 --- 10.0.0.3 ping statistics --- 01:19:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:27.807 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:27.807 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:27.807 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 01:19:27.807 01:19:27.807 --- 10.0.0.4 ping statistics --- 01:19:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:27.807 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:27.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:27.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:19:27.807 01:19:27.807 --- 10.0.0.1 ping statistics --- 01:19:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:27.807 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:27.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:27.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 01:19:27.807 01:19:27.807 --- 10.0.0.2 ping statistics --- 01:19:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:27.807 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68390 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68390 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68390 ']' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:27.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:27.807 05:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:27.807 [2024-12-09 05:14:19.282101] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:19:27.807 [2024-12-09 05:14:19.282146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:27.807 [2024-12-09 05:14:19.411693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:28.067 [2024-12-09 05:14:19.446226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:28.067 [2024-12-09 05:14:19.446376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:28.067 [2024-12-09 05:14:19.446784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:28.067 [2024-12-09 05:14:19.446871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:28.067 [2024-12-09 05:14:19.446891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:28.067 [2024-12-09 05:14:19.447195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 [2024-12-09 05:14:20.168216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 [2024-12-09 05:14:20.188303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 malloc0 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:19:28.655 { 01:19:28.655 "params": { 01:19:28.655 "name": "Nvme$subsystem", 01:19:28.655 "trtype": "$TEST_TRANSPORT", 01:19:28.655 "traddr": "$NVMF_FIRST_TARGET_IP", 01:19:28.655 "adrfam": "ipv4", 01:19:28.655 "trsvcid": "$NVMF_PORT", 01:19:28.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:19:28.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:19:28.655 "hdgst": ${hdgst:-false}, 01:19:28.655 "ddgst": ${ddgst:-false} 01:19:28.655 }, 01:19:28.655 "method": "bdev_nvme_attach_controller" 01:19:28.655 } 01:19:28.655 EOF 01:19:28.655 )") 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
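(Editorial aside.) gen_nvmf_target_json, traced above, assembles the bdevperf configuration by expanding a per-subsystem heredoc into a JSON fragment, joining the fragments, and piping the result through jq before handing it to bdevperf via --json /dev/fd/62. A condensed, single-controller sketch of that idea; the parameter values simply mirror the ones the trace substitutes for Nvme1, nothing new is introduced:

# Sketch: emit one bdev_nvme_attach_controller entry the same way the trace
# does (heredoc -> JSON fragment), then validate/pretty-print it with jq.
cat <<EOF | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF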
01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:19:28.655 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:19:28.655 "params": { 01:19:28.655 "name": "Nvme1", 01:19:28.655 "trtype": "tcp", 01:19:28.655 "traddr": "10.0.0.3", 01:19:28.655 "adrfam": "ipv4", 01:19:28.655 "trsvcid": "4420", 01:19:28.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:19:28.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:19:28.655 "hdgst": false, 01:19:28.655 "ddgst": false 01:19:28.655 }, 01:19:28.655 "method": "bdev_nvme_attach_controller" 01:19:28.655 }' 01:19:28.655 [2024-12-09 05:14:20.258370] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:28.655 [2024-12-09 05:14:20.258428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68441 ] 01:19:28.918 [2024-12-09 05:14:20.394711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:28.918 [2024-12-09 05:14:20.439621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:29.178 Running I/O for 10 seconds... 01:19:31.087 6979.00 IOPS, 54.52 MiB/s [2024-12-09T05:14:23.646Z] 7770.00 IOPS, 60.70 MiB/s [2024-12-09T05:14:25.045Z] 8245.33 IOPS, 64.42 MiB/s [2024-12-09T05:14:25.618Z] 8484.50 IOPS, 66.29 MiB/s [2024-12-09T05:14:27.004Z] 8639.40 IOPS, 67.50 MiB/s [2024-12-09T05:14:27.951Z] 8736.00 IOPS, 68.25 MiB/s [2024-12-09T05:14:28.891Z] 8804.29 IOPS, 68.78 MiB/s [2024-12-09T05:14:29.829Z] 8854.12 IOPS, 69.17 MiB/s [2024-12-09T05:14:30.766Z] 8886.56 IOPS, 69.43 MiB/s [2024-12-09T05:14:30.766Z] 8908.60 IOPS, 69.60 MiB/s 01:19:39.149 Latency(us) 01:19:39.149 [2024-12-09T05:14:30.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:39.149 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:19:39.149 Verification LBA range: start 0x0 length 0x1000 01:19:39.149 Nvme1n1 : 10.01 8912.02 69.63 0.00 0.00 14322.00 1676.21 26416.05 01:19:39.149 [2024-12-09T05:14:30.766Z] =================================================================================================================== 01:19:39.149 [2024-12-09T05:14:30.766Z] Total : 8912.02 69.63 0.00 0.00 14322.00 1676.21 26416.05 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68564 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:19:39.149 { 01:19:39.149 "params": { 01:19:39.149 "name": "Nvme$subsystem", 
01:19:39.149 "trtype": "$TEST_TRANSPORT", 01:19:39.149 "traddr": "$NVMF_FIRST_TARGET_IP", 01:19:39.149 "adrfam": "ipv4", 01:19:39.149 "trsvcid": "$NVMF_PORT", 01:19:39.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:19:39.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:19:39.149 "hdgst": ${hdgst:-false}, 01:19:39.149 "ddgst": ${ddgst:-false} 01:19:39.149 }, 01:19:39.149 "method": "bdev_nvme_attach_controller" 01:19:39.149 } 01:19:39.149 EOF 01:19:39.149 )") 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:19:39.149 [2024-12-09 05:14:30.757141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.149 [2024-12-09 05:14:30.757169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:19:39.149 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:19:39.149 05:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:19:39.149 "params": { 01:19:39.149 "name": "Nvme1", 01:19:39.149 "trtype": "tcp", 01:19:39.149 "traddr": "10.0.0.3", 01:19:39.149 "adrfam": "ipv4", 01:19:39.149 "trsvcid": "4420", 01:19:39.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:19:39.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:19:39.149 "hdgst": false, 01:19:39.149 "ddgst": false 01:19:39.149 }, 01:19:39.149 "method": "bdev_nvme_attach_controller" 01:19:39.149 }' 01:19:39.409 [2024-12-09 05:14:30.769118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.769131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.777114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.777125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.785115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.785126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.789654] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:19:39.409 [2024-12-09 05:14:30.790034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68564 ] 01:19:39.409 [2024-12-09 05:14:30.793116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.793265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.801126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.801228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.809124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.809219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.817132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.817229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.825127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.825221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.833128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.409 [2024-12-09 05:14:30.833223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:19:39.409 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.409 [2024-12-09 05:14:30.841130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.841225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.849132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.849232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.861135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.861235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.869135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.869228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.877138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.877230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.885140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.885234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.893141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.893151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.901140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.901154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.909140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.909153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.917141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.917155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.924015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:39.410 [2024-12-09 05:14:30.925145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.925156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.933147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.933217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.941150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.941209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.949153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.949214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.953860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:39.410 [2024-12-09 05:14:30.957154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.957210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.965157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.965215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.973158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.973219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.981161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.981222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:30.993164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:30.993227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:31.001164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:31.001226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:31.009164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:31.009222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.410 [2024-12-09 05:14:31.017165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.410 [2024-12-09 05:14:31.017222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.410 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.025180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.025256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.033173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.033243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.041175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.041240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.049179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.049251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.057180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.057248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.065184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.065250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.073182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.073240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.081189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.081208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:19:39.669 [2024-12-09 05:14:31.089185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.089199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 Running I/O for 5 seconds... 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.097188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.097201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.108293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.108315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.117228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.117249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.126530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.126551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.135596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.135616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.144974] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.144994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.154019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.154040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.163278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.163368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.172426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.172447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.669 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.669 [2024-12-09 05:14:31.181832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.669 [2024-12-09 05:14:31.181914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.190912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.190932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.199921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 
05:14:31.199941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.208992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.209074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.218395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.218416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.227845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.227924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.236852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.236871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.246083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.246104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.254851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.254935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.264797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.264816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.273962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.274040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.670 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.670 [2024-12-09 05:14:31.283451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.670 [2024-12-09 05:14:31.283472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.292958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.293039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.302016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.302037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.310992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.311012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.320196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.320275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.329105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.329125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.338062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.338082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.346934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.347014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.356216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.356237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.365771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.365847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.374085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.374106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.383418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.383440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.396711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.396730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.405029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.405050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.413816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.413836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.423129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.423148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.432051] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.432134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.441028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.441049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.450397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.450418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.459346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.459429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.468797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.468817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.477778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.477857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.487036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 
05:14:31.487059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.496338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.496358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.505314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.505393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.514389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.514410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.523341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.523363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.929 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.929 [2024-12-09 05:14:31.532323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.929 [2024-12-09 05:14:31.532406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:39.930 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:39.930 [2024-12-09 05:14:31.541637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:39.930 [2024-12-09 05:14:31.541656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.550731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.550810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.564917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.564938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.573441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.573461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.582913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.582933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.592039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.592059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:40.191 [2024-12-09 05:14:31.601537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:40.191 [2024-12-09 05:14:31.601558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:40.191 [2024-12-09 05:14:31.610486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:40.191 [2024-12-09 05:14:31.610574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:40.191 2024/12/09 05:14:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:40.713 [2024-12-09 05:14:32.089387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:40.713 [2024-12-09 05:14:32.089407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:40.713 17898.00 IOPS, 139.83 MiB/s [2024-12-09T05:14:32.330Z]
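The run of errors above is the Go JSON-RPC test client repeatedly asking the nvmf target to attach bdev malloc0 as NSID 1 on subsystem nqn.2016-06.io.spdk:cnode1; because that NSID is already attached, subsystem.c rejects each attempt and the client receives the standard JSON-RPC "invalid params" error (Code=-32602), while the interleaved perf readout (17898.00 IOPS at 139.83 MiB/s, which works out to roughly 8 KiB per I/O) shows the data path keeps running during the RPC loop. The sketch below reconstructs one such exchange; it is not taken from the test scripts, and the socket path /var/tmp/spdk.sock, the request ids, and the two-iteration loop are illustrative assumptions.

#!/usr/bin/env python3
# Minimal sketch of the nvmf_subsystem_add_ns exchange behind the errors above.
# Assumptions: SPDK's RPC server listens on /var/tmp/spdk.sock (the usual default,
# not shown in this log) and a single recv() returns the whole response.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"

def rpc_call(sock, req_id, method, params):
    # One JSON-RPC 2.0 request/response round trip over the Unix socket.
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    sock.sendall(json.dumps(request).encode())
    return json.loads(sock.recv(65536).decode())

# Parameters copied from the failing calls in the log.
params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    # The first add of NSID 1 succeeds; any repeat is answered with
    # {"error": {"code": -32602, "message": "Invalid parameters"}} because the
    # target logs "Requested NSID 1 already in use" and refuses the namespace.
    for req_id in (1, 2):
        print(rpc_call(sock, req_id, "nvmf_subsystem_add_ns", params))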
01:19:41.233 [2024-12-09 05:14:32.833997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:41.233 [2024-12-09 05:14:32.834019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:41.233 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.233 [2024-12-09 05:14:32.843236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.233 [2024-12-09 05:14:32.843256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.233 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.491 [2024-12-09 05:14:32.852390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.852473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.861965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.861986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.871623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.871642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.880460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.880481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.889285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.889306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:19:41.492 [2024-12-09 05:14:32.898442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.898463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.907457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.907540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.916809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.916835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.926248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.926268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.935406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.935488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.944944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.944964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.954385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.954466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.964002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.964022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.973405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.973483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.982698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.982717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:32.991668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:32.991687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.001187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.001268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.010708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.010727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.020181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.020258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.029602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.029621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.038463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.038483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.047776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.047796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.056783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.056805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.492 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.492 [2024-12-09 05:14:33.065543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.492 [2024-12-09 05:14:33.065570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.493 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.493 [2024-12-09 05:14:33.074407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.493 [2024-12-09 05:14:33.074429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.493 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.493 [2024-12-09 05:14:33.083852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.493 [2024-12-09 05:14:33.083872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.493 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.493 17952.50 IOPS, 140.25 MiB/s [2024-12-09T05:14:33.110Z] [2024-12-09 05:14:33.093003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.493 [2024-12-09 05:14:33.093023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.493 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.493 [2024-12-09 05:14:33.101877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.493 [2024-12-09 05:14:33.101897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.493 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.753 [2024-12-09 05:14:33.110803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.753 [2024-12-09 05:14:33.110883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.753 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:41.753 [2024-12-09 05:14:33.120102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:41.753 [2024-12-09 05:14:33.120122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:41.753 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
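All of the failures recorded here are the same exchange repeated: a client keeps asking the target to attach bdev malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 with the explicit NSID 1, spdk_nvmf_subsystem_add_ns_ext rejects it because NSID 1 is already taken, and the RPC layer answers with Code=-32602 Msg=Invalid parameters (the map[...] and %!s(bool=false) formatting suggests a Go JSON-RPC client is the caller). Purely as an illustrative sketch, not the test's actual client code, an equivalent request could be issued by hand over the target's RPC socket; the /var/tmp/spdk.sock path below is an assumed default, not something shown in this log:

# Hypothetical sketch only: replays the failing nvmf_subsystem_add_ns call seen above.
# Assumes an SPDK target listening on /var/tmp/spdk.sock (assumed default path).
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
            "bdev_name": "malloc0",
            "nsid": 1,  # NSID 1 is already in use, so the target rejects the request
        },
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # One recv is enough for this small reply; expected shape matching the log:
    # {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}
    print(json.loads(sock.recv(65536).decode()))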
01:19:41.753 [2024-12-09 05:14:33.128932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:41.753 [2024-12-09 05:14:33.128952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:41.753 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:42.280 [2024-12-09 05:14:33.744952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:42.280 [2024-12-09 05:14:33.744971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:42.280 [2024-12-09 05:14:33.754621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:42.280 [2024-12-09 05:14:33.754640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.764039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.764059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.772960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.772980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.781947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.782030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.790899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.790919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.799726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.799746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.809105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.809182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.818496] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.818518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.827451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.827472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.836251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.836272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.845300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.845320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.854064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.854084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.863393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.863474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.872366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 
05:14:33.872388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.280 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.280 [2024-12-09 05:14:33.881575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.280 [2024-12-09 05:14:33.881595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.281 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.281 [2024-12-09 05:14:33.890269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.281 [2024-12-09 05:14:33.890290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.281 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.899639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.899659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.908630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.908713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.917651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.917728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.926621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.926697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.934973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.935048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.944240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.944261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.953448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.953524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.961735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.961756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.971175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.971195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.541 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.541 [2024-12-09 05:14:33.980647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.541 [2024-12-09 05:14:33.980666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:33.989888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:33.989908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:33.998848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:33.998867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.007825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.007845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.017297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.017317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.026916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.027001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.036089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.036110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.045234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.045255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.054325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.054406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.063832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.063852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.072798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.072817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.081622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.081641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.090549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.090573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 17973.33 IOPS, 140.42 MiB/s [2024-12-09T05:14:34.159Z] 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:19:42.542 [2024-12-09 05:14:34.100140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.100224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.109039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.109059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.118001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.118021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.127139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.127219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.135993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.136014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.542 [2024-12-09 05:14:34.150440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.542 [2024-12-09 05:14:34.150519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.542 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.163700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.163721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.172613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.172633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.181657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.181677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.190990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.191010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.200023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.200044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.209215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.209297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.827 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.827 [2024-12-09 05:14:34.218942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.827 [2024-12-09 05:14:34.218963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.227898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.227918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.236827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.236847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.245718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.245738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.255285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.255367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.264343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.264363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.273682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.273761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.283509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.283530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.293181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.293263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.302213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.302234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.311927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.312006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.321432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.321454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.330758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.330839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.339720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.339739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.348822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.348842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.358153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.358174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.367148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.367168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.376230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.376251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.385548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.385638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:19:42.828 [2024-12-09 05:14:34.395185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.395206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.404357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.404378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.418137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.418157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:42.828 [2024-12-09 05:14:34.426692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:42.828 [2024-12-09 05:14:34.426712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:42.828 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.435936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.435958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.445432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.445515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.454665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.454684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.464248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.464268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.473838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.473858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.483197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.483218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.492266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.492286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.501272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.501356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.511165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.511185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.520410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.520491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.529802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.529822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.543693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.543713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.552373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.552394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.561546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.561575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:43.088 [2024-12-09 05:14:34.570744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:43.088 [2024-12-09 05:14:34.570829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:43.088 [2024-12-09 05:14:34.579773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:43.088 [2024-12-09 05:14:34.579794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:43.088 [2024-12-09 05:14:34.588816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:43.088 [2024-12-09 05:14:34.588835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:43.088 2024/12/09 05:14:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[The same three-line error sequence repeats for each retried nvmf_subsystem_add_ns call against nqn.2016-06.io.spdk:cnode1 (NSID 1 already in use), with only the SPDK and Go-client timestamps advancing from 05:14:34.59 through 05:14:35.80 and the elapsed counter moving from 01:19:43.088 to 01:19:44.384; a single I/O throughput sample is interleaved:]
01:19:43.607 17885.50 IOPS, 139.73 MiB/s [2024-12-09T05:14:35.224Z]
01:19:44.384 [2024-12-09 05:14:35.795516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:44.384 [2024-12-09 05:14:35.795538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:19:44.384 [2024-12-09 05:14:35.804397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:19:44.384 [2024-12-09 05:14:35.804418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.813747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.813828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.822814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.822834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.832603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.832623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.841831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.841851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.851007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.851027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.859333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.859352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.873498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.873519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.882754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.882776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.896143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.896225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.904632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.904652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.913783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.913804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.922825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.922906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.932340] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.932362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.941656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.941676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.950672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.950694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.960472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.960492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.969756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.384 [2024-12-09 05:14:35.969776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.384 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.384 [2024-12-09 05:14:35.978764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.385 [2024-12-09 05:14:35.978784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.385 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.385 [2024-12-09 05:14:35.987934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.385 [2024-12-09 
05:14:35.987955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.385 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.385 [2024-12-09 05:14:35.996968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.385 [2024-12-09 05:14:35.997050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.006696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.006715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.015691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.015771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.025144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.025165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.034996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.035078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.044070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.044092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.053232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.053253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.062272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.062354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.071369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.071391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.080447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.080468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.089750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.089770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 17815.60 IOPS, 139.18 MiB/s [2024-12-09T05:14:36.262Z] 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 01:19:44.645 Latency(us) 01:19:44.645 [2024-12-09T05:14:36.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:44.645 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:19:44.645 Nvme1n1 : 5.01 17820.60 139.22 0.00 0.00 7177.02 2974.33 16938.54 01:19:44.645 
[2024-12-09T05:14:36.262Z] =================================================================================================================== 01:19:44.645 [2024-12-09T05:14:36.262Z] Total : 17820.60 139.22 0.00 0.00 7177.02 2974.33 16938.54 01:19:44.645 [2024-12-09 05:14:36.098171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.098191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.104161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.104178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.112165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.112182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.120168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.120182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.645 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.645 [2024-12-09 05:14:36.128172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.645 [2024-12-09 05:14:36.128188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.136173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.136190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.144173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.144188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.152175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.152189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.160177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.160191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.168177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.168189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.176180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.176194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.184182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.184196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:19:44.646 [2024-12-09 05:14:36.192183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.192196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.200184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.200197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.208186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.208198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.216188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.216201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.224189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.224202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 [2024-12-09 05:14:36.232193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:44.646 [2024-12-09 05:14:36.232205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:44.646 2024/12/09 05:14:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:44.646 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68564) - No such process 
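[Editor's note: the flood of Code=-32602 errors above is the zcopy test deliberately re-issuing the same add-namespace RPC while I/O is running, and the target correctly rejecting it because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1. A minimal shell sketch of that pattern follows; it is an illustration only, not the literal zcopy.sh code. The rpc.py invocation style and the perf_pid variable are assumptions, while the NQN, bdev name and NSID are taken from the params printed in the errors.]

    # Sketch only: re-adding an NSID that is already in use is expected to fail with
    # JSON-RPC error -32602 (Invalid parameters), which is exactly what the log shows.
    NQN=nqn.2016-06.io.spdk:cnode1            # subsystem from the error params
    while kill -0 "$perf_pid" 2>/dev/null; do # perf_pid is hypothetical: the background I/O job
        # malloc0 is already attached to $NQN as NSID 1, so every call is rejected
        scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
    done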
01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68564 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:44.646 delay0 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:44.646 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:44.909 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:44.909 05:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 01:19:44.909 [2024-12-09 05:14:36.423283] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:19:51.538 Initializing NVMe Controllers 01:19:51.538 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:19:51.538 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:19:51.538 Initialization complete. Launching workers. 
01:19:51.538 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 01:19:51.538 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 01:19:51.538 success 172, unsuccessful 183, failed 0 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:51.538 rmmod nvme_tcp 01:19:51.538 rmmod nvme_fabrics 01:19:51.538 rmmod nvme_keyring 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68390 ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68390 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68390 ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68390 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68390 01:19:51.538 killing process with pid 68390 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68390' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68390 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68390 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:51.538 05:14:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:19:51.538 01:19:51.538 real 0m24.244s 01:19:51.538 user 0m41.255s 01:19:51.538 sys 0m4.678s 01:19:51.538 ************************************ 01:19:51.538 END TEST nvmf_zcopy 01:19:51.538 ************************************ 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:51.538 05:14:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:51.538 05:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:19:51.538 05:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:51.539 05:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:51.539 05:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:51.539 ************************************ 01:19:51.539 START TEST nvmf_nmic 01:19:51.539 ************************************ 01:19:51.539 05:14:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:19:51.539 * Looking for test storage... 01:19:51.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:51.539 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:51.539 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 01:19:51.539 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:51.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:51.798 --rc genhtml_branch_coverage=1 01:19:51.798 --rc genhtml_function_coverage=1 01:19:51.798 --rc genhtml_legend=1 01:19:51.798 --rc geninfo_all_blocks=1 01:19:51.798 --rc geninfo_unexecuted_blocks=1 01:19:51.798 01:19:51.798 ' 01:19:51.798 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:51.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:51.798 --rc genhtml_branch_coverage=1 01:19:51.798 --rc genhtml_function_coverage=1 01:19:51.799 --rc genhtml_legend=1 01:19:51.799 --rc geninfo_all_blocks=1 01:19:51.799 --rc geninfo_unexecuted_blocks=1 01:19:51.799 01:19:51.799 ' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:51.799 --rc genhtml_branch_coverage=1 01:19:51.799 --rc genhtml_function_coverage=1 01:19:51.799 --rc genhtml_legend=1 01:19:51.799 --rc geninfo_all_blocks=1 01:19:51.799 --rc geninfo_unexecuted_blocks=1 01:19:51.799 01:19:51.799 ' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:51.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:51.799 --rc genhtml_branch_coverage=1 01:19:51.799 --rc genhtml_function_coverage=1 01:19:51.799 --rc genhtml_legend=1 01:19:51.799 --rc geninfo_all_blocks=1 01:19:51.799 --rc geninfo_unexecuted_blocks=1 01:19:51.799 01:19:51.799 ' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:51.799 05:14:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:51.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:19:51.799 05:14:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:51.799 Cannot 
find device "nvmf_init_br" 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:51.799 Cannot find device "nvmf_init_br2" 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:19:51.799 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:51.800 Cannot find device "nvmf_tgt_br" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:51.800 Cannot find device "nvmf_tgt_br2" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:51.800 Cannot find device "nvmf_init_br" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:51.800 Cannot find device "nvmf_init_br2" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:51.800 Cannot find device "nvmf_tgt_br" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:51.800 Cannot find device "nvmf_tgt_br2" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:51.800 Cannot find device "nvmf_br" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:51.800 Cannot find device "nvmf_init_if" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:51.800 Cannot find device "nvmf_init_if2" 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:51.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:51.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:51.800 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:52.058 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:52.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:52.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 01:19:52.059 01:19:52.059 --- 10.0.0.3 ping statistics --- 01:19:52.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:52.059 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:52.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:52.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 01:19:52.059 01:19:52.059 --- 10.0.0.4 ping statistics --- 01:19:52.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:52.059 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:52.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:52.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:19:52.059 01:19:52.059 --- 10.0.0.1 ping statistics --- 01:19:52.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:52.059 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:52.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:52.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 01:19:52.059 01:19:52.059 --- 10.0.0.2 ping statistics --- 01:19:52.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:52.059 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68934 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68934 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68934 ']' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.059 05:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:19:52.059 [2024-12-09 05:14:43.579990] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
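At this point the harness has started the SPDK target inside the namespace (nvmfappstart above). A minimal stand-alone equivalent, with the binary path and flags copied from the trace and a simplified polling loop standing in for the waitforlisten helper, would look roughly like this:

    NS_EXEC='ip netns exec nvmf_tgt_ns_spdk'
    $NS_EXEC /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Block until the app answers on its default JSON-RPC socket (/var/tmp/spdk.sock)
    # before any configuration calls are issued.
    until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited prematurely' >&2; exit 1; }
        sleep 0.5
    done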
01:19:52.059 [2024-12-09 05:14:43.580146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:52.318 [2024-12-09 05:14:43.716182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:52.318 [2024-12-09 05:14:43.752912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:52.318 [2024-12-09 05:14:43.752947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:52.318 [2024-12-09 05:14:43.752954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:52.318 [2024-12-09 05:14:43.752958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:52.318 [2024-12-09 05:14:43.752963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:52.318 [2024-12-09 05:14:43.753625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:52.318 [2024-12-09 05:14:43.753834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:52.318 [2024-12-09 05:14:43.755025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:52.318 [2024-12-09 05:14:43.755104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.902 [2024-12-09 05:14:44.459427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.902 Malloc0 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:52.902 05:14:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:52.902 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:53.160 [2024-12-09 05:14:44.517553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:19:53.160 test case1: single bdev can't be used in multiple subsystems 01:19:53.160 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:53.160 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:19:53.160 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:19:53.160 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:53.160 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:53.161 [2024-12-09 05:14:44.541461] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:19:53.161 [2024-12-09 05:14:44.541487] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:19:53.161 [2024-12-09 05:14:44.541494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:53.161 2024/12/09 05:14:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:19:53.161 request: 01:19:53.161 { 01:19:53.161 "method": "nvmf_subsystem_add_ns", 01:19:53.161 "params": { 01:19:53.161 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:19:53.161 "namespace": { 01:19:53.161 "bdev_name": "Malloc0", 01:19:53.161 "no_auto_visible": false, 01:19:53.161 "hide_metadata": false 01:19:53.161 } 01:19:53.161 } 01:19:53.161 } 01:19:53.161 Got JSON-RPC error response 01:19:53.161 GoRPCClient: error on JSON-RPC call 01:19:53.161 Adding namespace failed - expected result. 01:19:53.161 test case2: host connect to nvmf target in multiple paths 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:53.161 [2024-12-09 05:14:44.553531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:19:53.161 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:19:53.436 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:19:53.436 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:19:53.436 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:19:53.436 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:19:53.436 05:14:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
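The control-plane sequence that produced the trace above, condensed into a sketch: the rpc.py path, NQNs, addresses and serials are taken verbatim from the log, the hostnqn/hostid come from the earlier nvme gen-hostnqn step, and error handling is omitted.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # test case1: a bdev can only be claimed by one subsystem, so this add_ns is expected to fail
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'
    # test case2: expose a second listener on cnode1 and connect the host over both paths
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421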
01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:19:55.380 05:14:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:19:55.380 [global] 01:19:55.380 thread=1 01:19:55.380 invalidate=1 01:19:55.380 rw=write 01:19:55.380 time_based=1 01:19:55.380 runtime=1 01:19:55.380 ioengine=libaio 01:19:55.380 direct=1 01:19:55.380 bs=4096 01:19:55.380 iodepth=1 01:19:55.380 norandommap=0 01:19:55.380 numjobs=1 01:19:55.380 01:19:55.380 verify_dump=1 01:19:55.380 verify_backlog=512 01:19:55.380 verify_state_save=0 01:19:55.380 do_verify=1 01:19:55.380 verify=crc32c-intel 01:19:55.380 [job0] 01:19:55.380 filename=/dev/nvme0n1 01:19:55.380 Could not set queue depth (nvme0n1) 01:19:55.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:55.640 fio-3.35 01:19:55.640 Starting 1 thread 01:19:56.581 01:19:56.581 job0: (groupid=0, jobs=1): err= 0: pid=69044: Mon Dec 9 05:14:48 2024 01:19:56.581 read: IOPS=5698, BW=22.3MiB/s (23.3MB/s)(22.3MiB/1001msec) 01:19:56.581 slat (nsec): min=5871, max=65362, avg=6763.59, stdev=2404.11 01:19:56.581 clat (usec): min=69, max=3107, avg=83.14, stdev=44.56 01:19:56.581 lat (usec): min=75, max=3115, avg=89.90, stdev=44.72 01:19:56.581 clat percentiles (usec): 01:19:56.581 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 78], 01:19:56.581 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 01:19:56.581 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 90], 95.00th=[ 93], 01:19:56.581 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 229], 99.95th=[ 412], 01:19:56.581 | 99.99th=[ 3097] 01:19:56.581 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 01:19:56.581 slat (usec): min=8, max=133, avg=11.02, stdev= 4.75 01:19:56.581 clat (usec): min=51, max=3558, avg=66.62, stdev=81.98 01:19:56.581 lat (usec): min=60, max=3570, avg=77.65, stdev=82.40 01:19:56.581 clat percentiles (usec): 01:19:56.581 | 1.00th=[ 56], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 01:19:56.581 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 01:19:56.581 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74], 01:19:56.581 | 99.00th=[ 83], 99.50th=[ 89], 99.90th=[ 906], 99.95th=[ 2180], 01:19:56.581 | 99.99th=[ 3556] 01:19:56.581 bw ( KiB/s): min=24576, max=24576, per=100.00%, avg=24576.00, stdev= 0.00, samples=1 01:19:56.581 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 01:19:56.581 lat (usec) : 100=99.32%, 250=0.55%, 500=0.06%, 1000=0.01% 01:19:56.581 lat (msec) : 2=0.03%, 4=0.04% 01:19:56.581 cpu : usr=2.30%, sys=8.10%, ctx=11849, majf=0, minf=5 01:19:56.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:56.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:56.581 issued rwts: total=5704,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:56.581 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:56.581 01:19:56.581 Run status group 0 (all jobs): 01:19:56.581 READ: bw=22.3MiB/s (23.3MB/s), 22.3MiB/s-22.3MiB/s (23.3MB/s-23.3MB/s), io=22.3MiB (23.4MB), run=1001-1001msec 01:19:56.581 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s 
(25.1MB/s-25.1MB/s), io=24.0MiB (25.2MB), run=1001-1001msec 01:19:56.581 01:19:56.581 Disk stats (read/write): 01:19:56.581 nvme0n1: ios=5170/5541, merge=0/0, ticks=450/380, in_queue=830, util=90.68% 01:19:56.581 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:19:56.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:56.841 rmmod nvme_tcp 01:19:56.841 rmmod nvme_fabrics 01:19:56.841 rmmod nvme_keyring 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68934 ']' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68934 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68934 ']' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68934 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68934 01:19:56.841 killing process with pid 68934 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 68934' 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68934 01:19:56.841 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68934 01:19:57.101 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:57.101 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:57.101 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:57.101 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:19:57.102 01:19:57.102 real 0m5.663s 01:19:57.102 user 0m18.752s 01:19:57.102 sys 0m1.063s 01:19:57.102 ************************************ 01:19:57.102 END TEST nvmf_nmic 01:19:57.102 
************************************ 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:57.102 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:57.414 ************************************ 01:19:57.414 START TEST nvmf_fio_target 01:19:57.414 ************************************ 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:19:57.414 * Looking for test storage... 01:19:57.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:19:57.414 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:57.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:57.415 --rc genhtml_branch_coverage=1 01:19:57.415 --rc genhtml_function_coverage=1 01:19:57.415 --rc genhtml_legend=1 01:19:57.415 --rc geninfo_all_blocks=1 01:19:57.415 --rc geninfo_unexecuted_blocks=1 01:19:57.415 01:19:57.415 ' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:57.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:57.415 --rc genhtml_branch_coverage=1 01:19:57.415 --rc genhtml_function_coverage=1 01:19:57.415 --rc genhtml_legend=1 01:19:57.415 --rc geninfo_all_blocks=1 01:19:57.415 --rc geninfo_unexecuted_blocks=1 01:19:57.415 01:19:57.415 ' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:57.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:57.415 --rc genhtml_branch_coverage=1 01:19:57.415 --rc genhtml_function_coverage=1 01:19:57.415 --rc genhtml_legend=1 01:19:57.415 --rc geninfo_all_blocks=1 01:19:57.415 --rc geninfo_unexecuted_blocks=1 01:19:57.415 01:19:57.415 ' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:57.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:57.415 --rc genhtml_branch_coverage=1 01:19:57.415 --rc genhtml_function_coverage=1 01:19:57.415 --rc genhtml_legend=1 01:19:57.415 --rc geninfo_all_blocks=1 01:19:57.415 --rc geninfo_unexecuted_blocks=1 01:19:57.415 01:19:57.415 ' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:19:57.415 
05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:57.415 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:57.415 05:14:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:19:57.415 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:57.416 Cannot find device "nvmf_init_br" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:57.416 Cannot find device "nvmf_init_br2" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:57.416 Cannot find device "nvmf_tgt_br" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:57.416 Cannot find device "nvmf_tgt_br2" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:57.416 Cannot find device "nvmf_init_br" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:57.416 Cannot find device "nvmf_init_br2" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:57.416 Cannot find device "nvmf_tgt_br" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:57.416 Cannot find device "nvmf_tgt_br2" 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:19:57.416 05:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:57.416 Cannot find device "nvmf_br" 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:57.416 Cannot find device "nvmf_init_if" 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:57.416 Cannot find device "nvmf_init_if2" 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:57.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:57.416 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:19:57.416 
05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:57.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:57.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:57.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:19:57.678 01:19:57.678 --- 10.0.0.3 ping statistics --- 01:19:57.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:57.678 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:57.678 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:57.678 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 01:19:57.678 01:19:57.678 --- 10.0.0.4 ping statistics --- 01:19:57.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:57.678 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:57.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:57.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:19:57.678 01:19:57.678 --- 10.0.0.1 ping statistics --- 01:19:57.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:57.678 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:19:57.678 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:57.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:57.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:19:57.678 01:19:57.678 --- 10.0.0.2 ping statistics --- 01:19:57.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:57.679 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69273 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69273 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69273 ']' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:19:57.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:57.679 05:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:57.679 [2024-12-09 05:14:49.250639] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
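Before the target application starts, nvmf_veth_init has built a small virtual test network: two initiator-side veth interfaces on the host (10.0.0.1/24 and 10.0.0.2/24), two target-side veth interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), their peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and one-packet pings in each direction as a sanity check. A condensed sketch of that topology for the first initiator/target pair, taken from the commands traced above (the real helper also configures the second pair and tags its iptables rules with an SPDK_NVMF comment):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host

With the bridge carrying traffic between the two ends, the nvmf_tgt process launched below via the NVMF_TARGET_NS_CMD prefix (ip netns exec nvmf_tgt_ns_spdk ...) can listen on 10.0.0.3:4420 while the host-side initiator connects through nvmf_init_if.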
01:19:57.679 [2024-12-09 05:14:49.250698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:57.940 [2024-12-09 05:14:49.390896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:57.940 [2024-12-09 05:14:49.444257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:57.940 [2024-12-09 05:14:49.444311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:57.940 [2024-12-09 05:14:49.444318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:57.940 [2024-12-09 05:14:49.444323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:57.940 [2024-12-09 05:14:49.444329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:57.940 [2024-12-09 05:14:49.445454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:57.940 [2024-12-09 05:14:49.445763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:57.940 [2024-12-09 05:14:49.447236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:57.940 [2024-12-09 05:14:49.447331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:58.884 [2024-12-09 05:14:50.378643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:58.884 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:59.144 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:19:59.144 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:59.405 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:19:59.405 05:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:59.665 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:19:59.665 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:59.665 05:14:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:19:59.665 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:19:59.926 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:20:00.187 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:20:00.187 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:20:00.449 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:20:00.449 05:14:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:20:00.710 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:20:00.710 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:20:00.710 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:20:00.971 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:20:00.971 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:20:01.231 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:20:01.231 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:20:01.231 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:01.492 [2024-12-09 05:14:52.971244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:20:01.492 05:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:20:01.752 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:20:02.014 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:20:02.014 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:20:02.014 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:20:02.014 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 01:20:02.014 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:20:02.015 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:20:02.015 05:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:20:03.959 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:20:03.959 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:20:03.959 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:20:04.219 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:20:04.219 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:20:04.220 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:20:04.220 05:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:20:04.220 [global] 01:20:04.220 thread=1 01:20:04.220 invalidate=1 01:20:04.220 rw=write 01:20:04.220 time_based=1 01:20:04.220 runtime=1 01:20:04.220 ioengine=libaio 01:20:04.220 direct=1 01:20:04.220 bs=4096 01:20:04.220 iodepth=1 01:20:04.220 norandommap=0 01:20:04.220 numjobs=1 01:20:04.220 01:20:04.220 verify_dump=1 01:20:04.220 verify_backlog=512 01:20:04.220 verify_state_save=0 01:20:04.220 do_verify=1 01:20:04.220 verify=crc32c-intel 01:20:04.220 [job0] 01:20:04.220 filename=/dev/nvme0n1 01:20:04.220 [job1] 01:20:04.220 filename=/dev/nvme0n2 01:20:04.220 [job2] 01:20:04.220 filename=/dev/nvme0n3 01:20:04.220 [job3] 01:20:04.220 filename=/dev/nvme0n4 01:20:04.220 Could not set queue depth (nvme0n1) 01:20:04.220 Could not set queue depth (nvme0n2) 01:20:04.220 Could not set queue depth (nvme0n3) 01:20:04.220 Could not set queue depth (nvme0n4) 01:20:04.220 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:04.220 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:04.220 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:04.220 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:04.220 fio-3.35 01:20:04.220 Starting 4 threads 01:20:05.600 01:20:05.600 job0: (groupid=0, jobs=1): err= 0: pid=69555: Mon Dec 9 05:14:56 2024 01:20:05.600 read: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(22.2MiB/1001msec) 01:20:05.600 slat (nsec): min=4444, max=16383, avg=4866.40, stdev=659.19 01:20:05.600 clat (usec): min=70, max=166, avg=86.26, stdev=10.21 01:20:05.600 lat (usec): min=75, max=171, avg=91.12, stdev=10.24 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 80], 01:20:05.600 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 01:20:05.600 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 104], 01:20:05.600 | 99.00th=[ 133], 99.50th=[ 145], 99.90th=[ 155], 99.95th=[ 161], 01:20:05.600 | 99.99th=[ 167] 01:20:05.600 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 01:20:05.600 slat (nsec): min=7039, 
max=76460, avg=8003.89, stdev=2366.48 01:20:05.600 clat (usec): min=53, max=316, avg=69.24, stdev=10.81 01:20:05.600 lat (usec): min=60, max=327, avg=77.25, stdev=11.22 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 63], 01:20:05.600 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 69], 01:20:05.600 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 79], 95.00th=[ 87], 01:20:05.600 | 99.00th=[ 116], 99.50th=[ 126], 99.90th=[ 143], 99.95th=[ 147], 01:20:05.600 | 99.99th=[ 318] 01:20:05.600 bw ( KiB/s): min=24576, max=24576, per=36.31%, avg=24576.00, stdev= 0.00, samples=1 01:20:05.600 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 01:20:05.600 lat (usec) : 100=95.71%, 250=4.28%, 500=0.02% 01:20:05.600 cpu : usr=1.10%, sys=6.30%, ctx=11834, majf=0, minf=9 01:20:05.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:05.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.600 issued rwts: total=5690,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:05.600 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:05.600 job1: (groupid=0, jobs=1): err= 0: pid=69556: Mon Dec 9 05:14:56 2024 01:20:05.600 read: IOPS=5585, BW=21.8MiB/s (22.9MB/s)(21.8MiB/1001msec) 01:20:05.600 slat (usec): min=4, max=164, avg= 7.58, stdev= 5.38 01:20:05.600 clat (usec): min=68, max=1130, avg=87.36, stdev=18.64 01:20:05.600 lat (usec): min=74, max=1135, avg=94.94, stdev=19.74 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 01:20:05.600 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 87], 01:20:05.600 | 70.00th=[ 89], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 108], 01:20:05.600 | 99.00th=[ 137], 99.50th=[ 149], 99.90th=[ 247], 99.95th=[ 293], 01:20:05.600 | 99.99th=[ 1123] 01:20:05.600 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 01:20:05.600 slat (nsec): min=7096, max=95465, avg=10916.57, stdev=5451.53 01:20:05.600 clat (usec): min=53, max=312, avg=70.50, stdev=11.80 01:20:05.600 lat (usec): min=60, max=322, avg=81.42, stdev=14.05 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 64], 01:20:05.600 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 71], 01:20:05.600 | 70.00th=[ 73], 80.00th=[ 76], 90.00th=[ 82], 95.00th=[ 90], 01:20:05.600 | 99.00th=[ 118], 99.50th=[ 129], 99.90th=[ 157], 99.95th=[ 215], 01:20:05.600 | 99.99th=[ 314] 01:20:05.600 bw ( KiB/s): min=24576, max=24576, per=36.31%, avg=24576.00, stdev= 0.00, samples=1 01:20:05.600 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 01:20:05.600 lat (usec) : 100=94.76%, 250=5.18%, 500=0.05% 01:20:05.600 lat (msec) : 2=0.01% 01:20:05.600 cpu : usr=1.60%, sys=8.70%, ctx=11233, majf=0, minf=5 01:20:05.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:05.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.600 issued rwts: total=5591,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:05.600 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:05.600 job2: (groupid=0, jobs=1): err= 0: pid=69557: Mon Dec 9 05:14:56 2024 01:20:05.600 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:20:05.600 slat 
(nsec): min=6080, max=26008, avg=6805.64, stdev=1671.28 01:20:05.600 clat (usec): min=93, max=1308, avg=206.64, stdev=49.47 01:20:05.600 lat (usec): min=101, max=1317, avg=213.45, stdev=49.75 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 125], 5.00th=[ 157], 10.00th=[ 182], 20.00th=[ 194], 01:20:05.600 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 01:20:05.600 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 01:20:05.600 | 99.00th=[ 330], 99.50th=[ 635], 99.90th=[ 750], 99.95th=[ 840], 01:20:05.600 | 99.99th=[ 1303] 01:20:05.600 write: IOPS=2601, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 01:20:05.600 slat (nsec): min=9627, max=91145, avg=11617.85, stdev=5791.64 01:20:05.600 clat (usec): min=67, max=395, avg=160.88, stdev=27.17 01:20:05.600 lat (usec): min=79, max=405, avg=172.49, stdev=27.09 01:20:05.600 clat percentiles (usec): 01:20:05.600 | 1.00th=[ 87], 5.00th=[ 95], 10.00th=[ 109], 20.00th=[ 155], 01:20:05.600 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 01:20:05.600 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 01:20:05.600 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 314], 99.95th=[ 347], 01:20:05.600 | 99.99th=[ 396] 01:20:05.600 bw ( KiB/s): min=12136, max=12136, per=17.93%, avg=12136.00, stdev= 0.00, samples=1 01:20:05.600 iops : min= 3034, max= 3034, avg=3034.00, stdev= 0.00, samples=1 01:20:05.600 lat (usec) : 100=3.99%, 250=94.85%, 500=0.85%, 750=0.25%, 1000=0.04% 01:20:05.600 lat (msec) : 2=0.02% 01:20:05.600 cpu : usr=1.10%, sys=3.50%, ctx=5164, majf=0, minf=13 01:20:05.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:05.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.601 issued rwts: total=2560,2604,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:05.601 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:05.601 job3: (groupid=0, jobs=1): err= 0: pid=69558: Mon Dec 9 05:14:56 2024 01:20:05.601 read: IOPS=2279, BW=9119KiB/s (9338kB/s)(9128KiB/1001msec) 01:20:05.601 slat (nsec): min=6100, max=60698, avg=7575.12, stdev=3115.86 01:20:05.601 clat (usec): min=114, max=1478, avg=221.77, stdev=60.70 01:20:05.601 lat (usec): min=122, max=1485, avg=229.35, stdev=60.95 01:20:05.601 clat percentiles (usec): 01:20:05.601 | 1.00th=[ 149], 5.00th=[ 167], 10.00th=[ 194], 20.00th=[ 200], 01:20:05.601 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 01:20:05.601 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 281], 95.00th=[ 297], 01:20:05.601 | 99.00th=[ 424], 99.50th=[ 652], 99.90th=[ 807], 99.95th=[ 930], 01:20:05.601 | 99.99th=[ 1483] 01:20:05.601 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:20:05.601 slat (nsec): min=9571, max=92383, avg=11253.05, stdev=4939.76 01:20:05.601 clat (usec): min=89, max=406, avg=173.34, stdev=19.30 01:20:05.601 lat (usec): min=99, max=416, avg=184.59, stdev=19.32 01:20:05.601 clat percentiles (usec): 01:20:05.601 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 01:20:05.601 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 01:20:05.601 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 01:20:05.601 | 99.00th=[ 219], 99.50th=[ 297], 99.90th=[ 363], 99.95th=[ 392], 01:20:05.601 | 99.99th=[ 408] 01:20:05.601 bw ( KiB/s): min=11616, max=11616, per=17.16%, avg=11616.00, stdev= 0.00, samples=1 01:20:05.601 
iops : min= 2904, max= 2904, avg=2904.00, stdev= 0.00, samples=1 01:20:05.601 lat (usec) : 100=0.10%, 250=92.79%, 500=6.71%, 750=0.29%, 1000=0.08% 01:20:05.601 lat (msec) : 2=0.02% 01:20:05.601 cpu : usr=0.90%, sys=3.50%, ctx=4842, majf=0, minf=20 01:20:05.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:05.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:05.601 issued rwts: total=2282,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:05.601 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:05.601 01:20:05.601 Run status group 0 (all jobs): 01:20:05.601 READ: bw=62.9MiB/s (66.0MB/s), 9119KiB/s-22.2MiB/s (9338kB/s-23.3MB/s), io=63.0MiB (66.0MB), run=1001-1001msec 01:20:05.601 WRITE: bw=66.1MiB/s (69.3MB/s), 9.99MiB/s-24.0MiB/s (10.5MB/s-25.1MB/s), io=66.2MiB (69.4MB), run=1001-1001msec 01:20:05.601 01:20:05.601 Disk stats (read/write): 01:20:05.601 nvme0n1: ios=5170/5191, merge=0/0, ticks=452/362, in_queue=814, util=88.88% 01:20:05.601 nvme0n2: ios=4835/5120, merge=0/0, ticks=450/374, in_queue=824, util=89.83% 01:20:05.601 nvme0n3: ios=2054/2456, merge=0/0, ticks=428/400, in_queue=828, util=89.67% 01:20:05.601 nvme0n4: ios=2048/2161, merge=0/0, ticks=467/379, in_queue=846, util=89.95% 01:20:05.601 05:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:20:05.601 [global] 01:20:05.601 thread=1 01:20:05.601 invalidate=1 01:20:05.601 rw=randwrite 01:20:05.601 time_based=1 01:20:05.601 runtime=1 01:20:05.601 ioengine=libaio 01:20:05.601 direct=1 01:20:05.601 bs=4096 01:20:05.601 iodepth=1 01:20:05.601 norandommap=0 01:20:05.601 numjobs=1 01:20:05.601 01:20:05.601 verify_dump=1 01:20:05.601 verify_backlog=512 01:20:05.601 verify_state_save=0 01:20:05.601 do_verify=1 01:20:05.601 verify=crc32c-intel 01:20:05.601 [job0] 01:20:05.601 filename=/dev/nvme0n1 01:20:05.601 [job1] 01:20:05.601 filename=/dev/nvme0n2 01:20:05.601 [job2] 01:20:05.601 filename=/dev/nvme0n3 01:20:05.601 [job3] 01:20:05.601 filename=/dev/nvme0n4 01:20:05.601 Could not set queue depth (nvme0n1) 01:20:05.601 Could not set queue depth (nvme0n2) 01:20:05.601 Could not set queue depth (nvme0n3) 01:20:05.601 Could not set queue depth (nvme0n4) 01:20:05.601 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:05.601 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:05.601 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:05.601 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:05.601 fio-3.35 01:20:05.601 Starting 4 threads 01:20:06.977 01:20:06.977 job0: (groupid=0, jobs=1): err= 0: pid=69623: Mon Dec 9 05:14:58 2024 01:20:06.977 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 01:20:06.977 slat (nsec): min=5773, max=67454, avg=7755.74, stdev=4418.38 01:20:06.977 clat (usec): min=74, max=3745, avg=97.89, stdev=98.09 01:20:06.977 lat (usec): min=81, max=3751, avg=105.64, stdev=98.46 01:20:06.977 clat percentiles (usec): 01:20:06.977 | 1.00th=[ 82], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 01:20:06.977 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 94], 01:20:06.977 | 70.00th=[ 96], 
80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 115], 01:20:06.977 | 99.00th=[ 139], 99.50th=[ 153], 99.90th=[ 2638], 99.95th=[ 2769], 01:20:06.977 | 99.99th=[ 3752] 01:20:06.977 write: IOPS=5152, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1001msec); 0 zone resets 01:20:06.977 slat (usec): min=9, max=105, avg=11.16, stdev= 5.56 01:20:06.977 clat (usec): min=58, max=401, avg=75.94, stdev=14.19 01:20:06.977 lat (usec): min=67, max=416, avg=87.10, stdev=16.47 01:20:06.977 clat percentiles (usec): 01:20:06.977 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 69], 01:20:06.977 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 01:20:06.977 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 98], 01:20:06.977 | 99.00th=[ 117], 99.50th=[ 127], 99.90th=[ 235], 99.95th=[ 343], 01:20:06.977 | 99.99th=[ 404] 01:20:06.977 bw ( KiB/s): min=21480, max=21480, per=30.86%, avg=21480.00, stdev= 0.00, samples=1 01:20:06.977 iops : min= 5370, max= 5370, avg=5370.00, stdev= 0.00, samples=1 01:20:06.977 lat (usec) : 100=89.28%, 250=10.56%, 500=0.10% 01:20:06.977 lat (msec) : 2=0.01%, 4=0.06% 01:20:06.977 cpu : usr=1.90%, sys=7.80%, ctx=10278, majf=0, minf=5 01:20:06.977 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:06.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.977 issued rwts: total=5120,5158,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:06.977 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:06.977 job1: (groupid=0, jobs=1): err= 0: pid=69624: Mon Dec 9 05:14:58 2024 01:20:06.977 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 01:20:06.977 slat (nsec): min=5903, max=65697, avg=6975.10, stdev=3112.73 01:20:06.977 clat (usec): min=74, max=1477, avg=159.69, stdev=36.15 01:20:06.977 lat (usec): min=81, max=1484, avg=166.66, stdev=36.10 01:20:06.977 clat percentiles (usec): 01:20:06.977 | 1.00th=[ 116], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 01:20:06.977 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 01:20:06.977 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 204], 01:20:06.977 | 99.00th=[ 229], 99.50th=[ 265], 99.90th=[ 388], 99.95th=[ 652], 01:20:06.977 | 99.99th=[ 1483] 01:20:06.977 write: IOPS=3462, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec); 0 zone resets 01:20:06.977 slat (nsec): min=9084, max=94525, avg=11020.09, stdev=5173.56 01:20:06.977 clat (usec): min=59, max=408, avg=128.18, stdev=23.46 01:20:06.977 lat (usec): min=69, max=418, avg=139.20, stdev=22.81 01:20:06.977 clat percentiles (usec): 01:20:06.977 | 1.00th=[ 86], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 113], 01:20:06.977 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 125], 01:20:06.977 | 70.00th=[ 135], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 167], 01:20:06.977 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 355], 99.95th=[ 359], 01:20:06.977 | 99.99th=[ 408] 01:20:06.977 bw ( KiB/s): min=15488, max=15488, per=22.25%, avg=15488.00, stdev= 0.00, samples=1 01:20:06.977 iops : min= 3872, max= 3872, avg=3872.00, stdev= 0.00, samples=1 01:20:06.977 lat (usec) : 100=1.99%, 250=97.58%, 500=0.38%, 750=0.03% 01:20:06.977 lat (msec) : 2=0.02% 01:20:06.977 cpu : usr=0.90%, sys=4.90%, ctx=6538, majf=0, minf=19 01:20:06.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:06.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 issued rwts: total=3072,3466,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:06.978 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:06.978 job2: (groupid=0, jobs=1): err= 0: pid=69625: Mon Dec 9 05:14:58 2024 01:20:06.978 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 01:20:06.978 slat (nsec): min=5766, max=54118, avg=6567.94, stdev=1506.86 01:20:06.978 clat (usec): min=75, max=387, avg=94.94, stdev=12.78 01:20:06.978 lat (usec): min=81, max=393, avg=101.51, stdev=12.92 01:20:06.978 clat percentiles (usec): 01:20:06.978 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 01:20:06.978 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 94], 01:20:06.978 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 118], 01:20:06.978 | 99.00th=[ 141], 99.50th=[ 151], 99.90th=[ 186], 99.95th=[ 297], 01:20:06.978 | 99.99th=[ 388] 01:20:06.978 write: IOPS=5317, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1001msec); 0 zone resets 01:20:06.978 slat (nsec): min=9177, max=82284, avg=10369.01, stdev=2506.80 01:20:06.978 clat (usec): min=61, max=1182, avg=78.26, stdev=18.91 01:20:06.978 lat (usec): min=71, max=1192, avg=88.63, stdev=19.15 01:20:06.978 clat percentiles (usec): 01:20:06.978 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 01:20:06.978 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77], 01:20:06.978 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 101], 01:20:06.978 | 99.00th=[ 122], 99.50th=[ 133], 99.90th=[ 161], 99.95th=[ 227], 01:20:06.978 | 99.99th=[ 1188] 01:20:06.978 bw ( KiB/s): min=21048, max=21048, per=30.24%, avg=21048.00, stdev= 0.00, samples=1 01:20:06.978 iops : min= 5262, max= 5262, avg=5262.00, stdev= 0.00, samples=1 01:20:06.978 lat (usec) : 100=88.55%, 250=11.40%, 500=0.05% 01:20:06.978 lat (msec) : 2=0.01% 01:20:06.978 cpu : usr=2.10%, sys=6.70%, ctx=10444, majf=0, minf=15 01:20:06.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:06.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 issued rwts: total=5120,5323,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:06.978 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:06.978 job3: (groupid=0, jobs=1): err= 0: pid=69626: Mon Dec 9 05:14:58 2024 01:20:06.978 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 01:20:06.978 slat (nsec): min=4726, max=20800, avg=5225.83, stdev=948.26 01:20:06.978 clat (usec): min=82, max=761, avg=161.22, stdev=27.40 01:20:06.978 lat (usec): min=87, max=769, avg=166.44, stdev=27.43 01:20:06.978 clat percentiles (usec): 01:20:06.978 | 1.00th=[ 128], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 01:20:06.978 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 01:20:06.978 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 206], 01:20:06.978 | 99.00th=[ 227], 99.50th=[ 260], 99.90th=[ 355], 99.95th=[ 545], 01:20:06.978 | 99.99th=[ 758] 01:20:06.978 write: IOPS=3469, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1001msec); 0 zone resets 01:20:06.978 slat (nsec): min=6452, max=60255, avg=8967.21, stdev=4513.93 01:20:06.978 clat (usec): min=64, max=508, avg=130.62, stdev=23.31 01:20:06.978 lat (usec): min=75, max=516, avg=139.59, stdev=22.67 01:20:06.978 clat percentiles (usec): 01:20:06.978 | 1.00th=[ 93], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 115], 01:20:06.978 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 123], 
60.00th=[ 127], 01:20:06.978 | 70.00th=[ 137], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 172], 01:20:06.978 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 269], 99.95th=[ 367], 01:20:06.978 | 99.99th=[ 510] 01:20:06.978 bw ( KiB/s): min=15448, max=15448, per=22.19%, avg=15448.00, stdev= 0.00, samples=1 01:20:06.978 iops : min= 3862, max= 3862, avg=3862.00, stdev= 0.00, samples=1 01:20:06.978 lat (usec) : 100=1.38%, 250=98.24%, 500=0.34%, 750=0.03%, 1000=0.02% 01:20:06.978 cpu : usr=0.30%, sys=4.20%, ctx=6545, majf=0, minf=11 01:20:06.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:06.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:06.978 issued rwts: total=3072,3473,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:06.978 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:06.978 01:20:06.978 Run status group 0 (all jobs): 01:20:06.978 READ: bw=63.9MiB/s (67.0MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1001-1001msec 01:20:06.978 WRITE: bw=68.0MiB/s (71.3MB/s), 13.5MiB/s-20.8MiB/s (14.2MB/s-21.8MB/s), io=68.0MiB (71.4MB), run=1001-1001msec 01:20:06.978 01:20:06.978 Disk stats (read/write): 01:20:06.978 nvme0n1: ios=4486/4608, merge=0/0, ticks=460/366, in_queue=826, util=89.78% 01:20:06.978 nvme0n2: ios=2875/3072, merge=0/0, ticks=470/391, in_queue=861, util=90.44% 01:20:06.978 nvme0n3: ios=4623/4608, merge=0/0, ticks=460/386, in_queue=846, util=90.16% 01:20:06.978 nvme0n4: ios=2833/3072, merge=0/0, ticks=455/405, in_queue=860, util=90.03% 01:20:06.978 05:14:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:20:06.978 [global] 01:20:06.978 thread=1 01:20:06.978 invalidate=1 01:20:06.978 rw=write 01:20:06.978 time_based=1 01:20:06.978 runtime=1 01:20:06.978 ioengine=libaio 01:20:06.978 direct=1 01:20:06.978 bs=4096 01:20:06.978 iodepth=128 01:20:06.978 norandommap=0 01:20:06.978 numjobs=1 01:20:06.978 01:20:06.978 verify_dump=1 01:20:06.978 verify_backlog=512 01:20:06.978 verify_state_save=0 01:20:06.978 do_verify=1 01:20:06.978 verify=crc32c-intel 01:20:06.978 [job0] 01:20:06.978 filename=/dev/nvme0n1 01:20:06.978 [job1] 01:20:06.978 filename=/dev/nvme0n2 01:20:06.978 [job2] 01:20:06.978 filename=/dev/nvme0n3 01:20:06.978 [job3] 01:20:06.978 filename=/dev/nvme0n4 01:20:06.978 Could not set queue depth (nvme0n1) 01:20:06.978 Could not set queue depth (nvme0n2) 01:20:06.978 Could not set queue depth (nvme0n3) 01:20:06.978 Could not set queue depth (nvme0n4) 01:20:06.978 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:06.978 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:06.978 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:06.978 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:06.978 fio-3.35 01:20:06.978 Starting 4 threads 01:20:08.349 01:20:08.349 job0: (groupid=0, jobs=1): err= 0: pid=69679: Mon Dec 9 05:14:59 2024 01:20:08.349 read: IOPS=8944, BW=34.9MiB/s (36.6MB/s)(35.0MiB/1001msec) 01:20:08.349 slat (usec): min=3, max=1690, avg=55.32, stdev=215.87 01:20:08.349 clat (usec): min=292, max=9153, avg=7094.86, stdev=630.77 01:20:08.349 lat (usec): 
min=1982, max=9159, avg=7150.18, stdev=625.95 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 6652], 01:20:08.349 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7242], 01:20:08.349 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7963], 01:20:08.349 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 9110], 01:20:08.349 | 99.99th=[ 9110] 01:20:08.349 write: IOPS=9206, BW=36.0MiB/s (37.7MB/s)(36.0MiB/1001msec); 0 zone resets 01:20:08.349 slat (usec): min=5, max=1662, avg=51.09, stdev=196.26 01:20:08.349 clat (usec): min=5142, max=8566, avg=6862.73, stdev=461.37 01:20:08.349 lat (usec): min=5186, max=8578, avg=6913.82, stdev=428.06 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 6652], 01:20:08.349 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6915], 01:20:08.349 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7635], 01:20:08.349 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[ 8455], 99.95th=[ 8586], 01:20:08.349 | 99.99th=[ 8586] 01:20:08.349 bw ( KiB/s): min=36864, max=36864, per=38.09%, avg=36864.00, stdev= 0.00, samples=1 01:20:08.349 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=1 01:20:08.349 lat (usec) : 500=0.01% 01:20:08.349 lat (msec) : 2=0.01%, 4=0.30%, 10=99.68% 01:20:08.349 cpu : usr=4.20%, sys=11.70%, ctx=1102, majf=0, minf=9 01:20:08.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:20:08.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:08.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:08.349 issued rwts: total=8953,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:08.349 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:08.349 job1: (groupid=0, jobs=1): err= 0: pid=69680: Mon Dec 9 05:14:59 2024 01:20:08.349 read: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1003msec) 01:20:08.349 slat (usec): min=5, max=7182, avg=120.81, stdev=568.85 01:20:08.349 clat (usec): min=794, max=27872, avg=14761.78, stdev=3102.42 01:20:08.349 lat (usec): min=2930, max=28789, avg=14882.58, stdev=3144.50 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 4883], 5.00th=[10683], 10.00th=[11600], 20.00th=[12780], 01:20:08.349 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 01:20:08.349 | 70.00th=[15795], 80.00th=[16909], 90.00th=[18744], 95.00th=[19530], 01:20:08.349 | 99.00th=[24511], 99.50th=[25822], 99.90th=[27919], 99.95th=[27919], 01:20:08.349 | 99.99th=[27919] 01:20:08.349 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 01:20:08.349 slat (usec): min=8, max=5107, avg=157.79, stdev=565.72 01:20:08.349 clat (usec): min=7446, max=37636, avg=21034.47, stdev=7463.64 01:20:08.349 lat (usec): min=7461, max=37652, avg=21192.26, stdev=7514.83 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 8848], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[13698], 01:20:08.349 | 30.00th=[16319], 40.00th=[19268], 50.00th=[21365], 60.00th=[23462], 01:20:08.349 | 70.00th=[25035], 80.00th=[27395], 90.00th=[31851], 95.00th=[33817], 01:20:08.349 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 01:20:08.349 | 99.99th=[37487] 01:20:08.349 bw ( KiB/s): min=12432, max=16240, per=14.81%, avg=14336.00, stdev=2692.66, samples=2 01:20:08.349 iops : min= 3108, max= 4060, avg=3584.00, stdev=673.17, samples=2 01:20:08.349 lat 
(usec) : 1000=0.01% 01:20:08.349 lat (msec) : 4=0.34%, 10=5.59%, 20=63.88%, 50=30.18% 01:20:08.349 cpu : usr=2.20%, sys=5.99%, ctx=495, majf=0, minf=8 01:20:08.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 01:20:08.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:08.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:08.349 issued rwts: total=3468,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:08.349 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:08.349 job2: (groupid=0, jobs=1): err= 0: pid=69681: Mon Dec 9 05:14:59 2024 01:20:08.349 read: IOPS=7349, BW=28.7MiB/s (30.1MB/s)(28.7MiB/1001msec) 01:20:08.349 slat (usec): min=3, max=4202, avg=68.02, stdev=336.28 01:20:08.349 clat (usec): min=640, max=14066, avg=8541.05, stdev=1219.39 01:20:08.349 lat (usec): min=2226, max=14161, avg=8609.07, stdev=1244.57 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8029], 01:20:08.349 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 01:20:08.349 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10814], 01:20:08.349 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12256], 99.95th=[12256], 01:20:08.349 | 99.99th=[14091] 01:20:08.349 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 01:20:08.349 slat (usec): min=5, max=6356, avg=60.94, stdev=259.95 01:20:08.349 clat (usec): min=4530, max=13923, avg=8321.12, stdev=1065.87 01:20:08.349 lat (usec): min=4540, max=13943, avg=8382.06, stdev=1080.01 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 5473], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7701], 01:20:08.349 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8356], 01:20:08.349 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10683], 01:20:08.349 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 01:20:08.349 | 99.99th=[13960] 01:20:08.349 bw ( KiB/s): min=30480, max=31022, per=31.78%, avg=30751.00, stdev=383.25, samples=2 01:20:08.349 iops : min= 7620, max= 7755, avg=7687.50, stdev=95.46, samples=2 01:20:08.349 lat (usec) : 750=0.01% 01:20:08.349 lat (msec) : 4=0.16%, 10=90.74%, 20=9.09% 01:20:08.349 cpu : usr=3.70%, sys=11.70%, ctx=1008, majf=0, minf=11 01:20:08.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:20:08.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:08.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:08.349 issued rwts: total=7357,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:08.349 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:08.349 job3: (groupid=0, jobs=1): err= 0: pid=69682: Mon Dec 9 05:14:59 2024 01:20:08.349 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 01:20:08.349 slat (usec): min=6, max=8825, avg=143.48, stdev=783.91 01:20:08.349 clat (usec): min=9668, max=32137, avg=18500.89, stdev=4588.92 01:20:08.349 lat (usec): min=11988, max=32150, avg=18644.37, stdev=4567.39 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[11994], 5.00th=[13173], 10.00th=[13566], 20.00th=[14746], 01:20:08.349 | 30.00th=[15795], 40.00th=[16319], 50.00th=[16909], 60.00th=[17695], 01:20:08.349 | 70.00th=[20317], 80.00th=[22676], 90.00th=[26084], 95.00th=[27919], 01:20:08.349 | 99.00th=[28443], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 01:20:08.349 | 99.99th=[32113] 01:20:08.349 
write: IOPS=3793, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1004msec); 0 zone resets 01:20:08.349 slat (usec): min=4, max=7909, avg=123.45, stdev=645.17 01:20:08.349 clat (usec): min=1143, max=33158, avg=15847.99, stdev=5261.16 01:20:08.349 lat (usec): min=4616, max=33174, avg=15971.44, stdev=5253.02 01:20:08.349 clat percentiles (usec): 01:20:08.349 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 01:20:08.349 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14091], 60.00th=[14615], 01:20:08.349 | 70.00th=[16712], 80.00th=[19792], 90.00th=[22414], 95.00th=[27395], 01:20:08.349 | 99.00th=[31851], 99.50th=[33162], 99.90th=[33162], 99.95th=[33162], 01:20:08.349 | 99.99th=[33162] 01:20:08.349 bw ( KiB/s): min=14088, max=15360, per=15.22%, avg=14724.00, stdev=899.44, samples=2 01:20:08.349 iops : min= 3522, max= 3840, avg=3681.00, stdev=224.86, samples=2 01:20:08.349 lat (msec) : 2=0.01%, 10=1.30%, 20=74.50%, 50=24.19% 01:20:08.349 cpu : usr=2.09%, sys=6.18%, ctx=237, majf=0, minf=17 01:20:08.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 01:20:08.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:08.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:08.349 issued rwts: total=3584,3809,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:08.349 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:08.349 01:20:08.349 Run status group 0 (all jobs): 01:20:08.349 READ: bw=90.9MiB/s (95.3MB/s), 13.5MiB/s-34.9MiB/s (14.2MB/s-36.6MB/s), io=91.3MiB (95.7MB), run=1001-1004msec 01:20:08.349 WRITE: bw=94.5MiB/s (99.1MB/s), 14.0MiB/s-36.0MiB/s (14.6MB/s-37.7MB/s), io=94.9MiB (99.5MB), run=1001-1004msec 01:20:08.349 01:20:08.349 Disk stats (read/write): 01:20:08.349 nvme0n1: ios=7936/8192, merge=0/0, ticks=13611/12400, in_queue=26011, util=90.68% 01:20:08.349 nvme0n2: ios=3121/3103, merge=0/0, ticks=15092/20726, in_queue=35818, util=91.05% 01:20:08.349 nvme0n3: ios=6639/6656, merge=0/0, ticks=27351/24580, in_queue=51931, util=90.16% 01:20:08.349 nvme0n4: ios=3078/3200, merge=0/0, ticks=14496/11867, in_queue=26363, util=90.03% 01:20:08.349 05:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:20:08.349 [global] 01:20:08.349 thread=1 01:20:08.349 invalidate=1 01:20:08.349 rw=randwrite 01:20:08.349 time_based=1 01:20:08.349 runtime=1 01:20:08.349 ioengine=libaio 01:20:08.349 direct=1 01:20:08.349 bs=4096 01:20:08.349 iodepth=128 01:20:08.349 norandommap=0 01:20:08.350 numjobs=1 01:20:08.350 01:20:08.350 verify_dump=1 01:20:08.350 verify_backlog=512 01:20:08.350 verify_state_save=0 01:20:08.350 do_verify=1 01:20:08.350 verify=crc32c-intel 01:20:08.350 [job0] 01:20:08.350 filename=/dev/nvme0n1 01:20:08.350 [job1] 01:20:08.350 filename=/dev/nvme0n2 01:20:08.350 [job2] 01:20:08.350 filename=/dev/nvme0n3 01:20:08.350 [job3] 01:20:08.350 filename=/dev/nvme0n4 01:20:08.350 Could not set queue depth (nvme0n1) 01:20:08.350 Could not set queue depth (nvme0n2) 01:20:08.350 Could not set queue depth (nvme0n3) 01:20:08.350 Could not set queue depth (nvme0n4) 01:20:08.350 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:08.350 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:08.350 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 01:20:08.350 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:20:08.350 fio-3.35 01:20:08.350 Starting 4 threads 01:20:09.284 01:20:09.284 job0: (groupid=0, jobs=1): err= 0: pid=69737: Mon Dec 9 05:15:00 2024 01:20:09.284 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 01:20:09.284 slat (nsec): min=1828, max=4770.1k, avg=101395.79, stdev=485438.65 01:20:09.284 clat (usec): min=5791, max=19694, avg=12805.04, stdev=2165.21 01:20:09.284 lat (usec): min=5798, max=20071, avg=12906.44, stdev=2199.37 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[11076], 01:20:09.284 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 01:20:09.284 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15664], 95.00th=[16188], 01:20:09.284 | 99.00th=[17433], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 01:20:09.284 | 99.99th=[19792] 01:20:09.284 write: IOPS=5274, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets 01:20:09.284 slat (usec): min=3, max=4982, avg=87.19, stdev=349.36 01:20:09.284 clat (usec): min=3308, max=20798, avg=11599.84, stdev=2578.75 01:20:09.284 lat (usec): min=3545, max=20807, avg=11687.03, stdev=2599.29 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 01:20:09.284 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 01:20:09.284 | 70.00th=[12256], 80.00th=[13566], 90.00th=[15926], 95.00th=[16450], 01:20:09.284 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19792], 01:20:09.284 | 99.99th=[20841] 01:20:09.284 bw ( KiB/s): min=20521, max=20864, per=23.98%, avg=20692.50, stdev=242.54, samples=2 01:20:09.284 iops : min= 5130, max= 5216, avg=5173.00, stdev=60.81, samples=2 01:20:09.284 lat (msec) : 4=0.12%, 10=19.97%, 20=79.89%, 50=0.02% 01:20:09.284 cpu : usr=2.79%, sys=6.78%, ctx=1686, majf=0, minf=15 01:20:09.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:20:09.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:09.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:09.284 issued rwts: total=5120,5296,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:09.284 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:09.284 job1: (groupid=0, jobs=1): err= 0: pid=69738: Mon Dec 9 05:15:00 2024 01:20:09.284 read: IOPS=5943, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1005msec) 01:20:09.284 slat (nsec): min=1872, max=6147.0k, avg=91845.56, stdev=453325.82 01:20:09.284 clat (usec): min=377, max=21020, avg=11594.08, stdev=2984.62 01:20:09.284 lat (usec): min=3057, max=21032, avg=11685.93, stdev=3008.05 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 5145], 5.00th=[ 5866], 10.00th=[ 6980], 20.00th=[ 8717], 01:20:09.284 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 01:20:09.284 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14615], 95.00th=[15401], 01:20:09.284 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19792], 99.95th=[19792], 01:20:09.284 | 99.99th=[21103] 01:20:09.284 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 01:20:09.284 slat (usec): min=3, max=5312, avg=70.44, stdev=284.19 01:20:09.284 clat (usec): min=2287, max=20345, avg=9459.85, stdev=2508.50 01:20:09.284 lat (usec): min=2309, max=21083, avg=9530.29, stdev=2525.57 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 
3130], 5.00th=[ 5080], 10.00th=[ 6718], 20.00th=[ 7242], 01:20:09.284 | 30.00th=[ 7701], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10290], 01:20:09.284 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12125], 95.00th=[12911], 01:20:09.284 | 99.00th=[16909], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 01:20:09.284 | 99.99th=[20317] 01:20:09.284 bw ( KiB/s): min=21208, max=27944, per=28.48%, avg=24576.00, stdev=4763.07, samples=2 01:20:09.284 iops : min= 5302, max= 6986, avg=6144.00, stdev=1190.77, samples=2 01:20:09.284 lat (usec) : 500=0.01% 01:20:09.284 lat (msec) : 4=1.51%, 10=38.74%, 20=59.73%, 50=0.02% 01:20:09.284 cpu : usr=2.69%, sys=8.37%, ctx=1820, majf=0, minf=8 01:20:09.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:20:09.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:09.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:09.284 issued rwts: total=5973,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:09.284 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:09.284 job2: (groupid=0, jobs=1): err= 0: pid=69739: Mon Dec 9 05:15:00 2024 01:20:09.284 read: IOPS=5865, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1004msec) 01:20:09.284 slat (usec): min=3, max=4277, avg=86.56, stdev=408.45 01:20:09.284 clat (usec): min=1087, max=19694, avg=11161.39, stdev=2494.36 01:20:09.284 lat (usec): min=3688, max=19709, avg=11247.94, stdev=2485.85 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 6783], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8586], 01:20:09.284 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 01:20:09.284 | 70.00th=[12518], 80.00th=[13698], 90.00th=[14222], 95.00th=[14484], 01:20:09.284 | 99.00th=[17957], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 01:20:09.284 | 99.99th=[19792] 01:20:09.284 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 01:20:09.284 slat (usec): min=6, max=3830, avg=75.89, stdev=312.16 01:20:09.284 clat (usec): min=6269, max=17976, avg=9972.07, stdev=2542.99 01:20:09.284 lat (usec): min=6366, max=17989, avg=10047.96, stdev=2545.92 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8094], 01:20:09.284 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 01:20:09.284 | 70.00th=[10159], 80.00th=[12256], 90.00th=[13960], 95.00th=[15664], 01:20:09.284 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 01:20:09.284 | 99.99th=[17957] 01:20:09.284 bw ( KiB/s): min=24576, max=24625, per=28.51%, avg=24600.50, stdev=34.65, samples=2 01:20:09.284 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 01:20:09.284 lat (msec) : 2=0.01%, 4=0.21%, 10=47.95%, 20=51.83% 01:20:09.284 cpu : usr=2.49%, sys=9.87%, ctx=621, majf=0, minf=13 01:20:09.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:20:09.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:09.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:09.284 issued rwts: total=5889,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:09.284 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:09.284 job3: (groupid=0, jobs=1): err= 0: pid=69740: Mon Dec 9 05:15:00 2024 01:20:09.284 read: IOPS=3592, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1002msec) 01:20:09.284 slat (usec): min=2, max=9199, avg=123.38, stdev=615.24 01:20:09.284 clat (usec): min=1230, max=28810, 
avg=15309.64, stdev=2746.62 01:20:09.284 lat (usec): min=1649, max=28820, avg=15433.03, stdev=2752.23 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12387], 20.00th=[13435], 01:20:09.284 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14746], 60.00th=[15533], 01:20:09.284 | 70.00th=[15926], 80.00th=[16712], 90.00th=[19268], 95.00th=[20579], 01:20:09.284 | 99.00th=[22676], 99.50th=[24511], 99.90th=[28705], 99.95th=[28705], 01:20:09.284 | 99.99th=[28705] 01:20:09.284 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 01:20:09.284 slat (usec): min=3, max=4771, avg=131.51, stdev=436.39 01:20:09.284 clat (usec): min=2282, max=33086, avg=17437.98, stdev=4890.65 01:20:09.284 lat (usec): min=2769, max=33105, avg=17569.49, stdev=4912.68 01:20:09.284 clat percentiles (usec): 01:20:09.284 | 1.00th=[ 7767], 5.00th=[10814], 10.00th=[12387], 20.00th=[13829], 01:20:09.284 | 30.00th=[14484], 40.00th=[15270], 50.00th=[16319], 60.00th=[17433], 01:20:09.284 | 70.00th=[19268], 80.00th=[20841], 90.00th=[25035], 95.00th=[27132], 01:20:09.284 | 99.00th=[30016], 99.50th=[31589], 99.90th=[33162], 99.95th=[33162], 01:20:09.284 | 99.99th=[33162] 01:20:09.284 bw ( KiB/s): min=15488, max=16359, per=18.45%, avg=15923.50, stdev=615.89, samples=2 01:20:09.284 iops : min= 3872, max= 4089, avg=3980.50, stdev=153.44, samples=2 01:20:09.284 lat (msec) : 2=0.12%, 4=0.21%, 10=0.91%, 20=81.52%, 50=17.24% 01:20:09.284 cpu : usr=1.30%, sys=7.39%, ctx=729, majf=0, minf=11 01:20:09.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:20:09.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:09.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:20:09.284 issued rwts: total=3600,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:09.284 latency : target=0, window=0, percentile=100.00%, depth=128 01:20:09.284 01:20:09.284 Run status group 0 (all jobs): 01:20:09.284 READ: bw=80.0MiB/s (83.9MB/s), 14.0MiB/s-23.2MiB/s (14.7MB/s-24.3MB/s), io=80.4MiB (84.3MB), run=1002-1005msec 01:20:09.284 WRITE: bw=84.3MiB/s (88.4MB/s), 16.0MiB/s-23.9MiB/s (16.7MB/s-25.1MB/s), io=84.7MiB (88.8MB), run=1002-1005msec 01:20:09.284 01:20:09.284 Disk stats (read/write): 01:20:09.284 nvme0n1: ios=4515/4608, merge=0/0, ticks=18801/16012, in_queue=34813, util=90.28% 01:20:09.284 nvme0n2: ios=5240/5632, merge=0/0, ticks=27331/24658, in_queue=51989, util=89.95% 01:20:09.284 nvme0n3: ios=5149/5504, merge=0/0, ticks=13822/12771, in_queue=26593, util=90.18% 01:20:09.284 nvme0n4: ios=3302/3584, merge=0/0, ticks=14582/15790, in_queue=30372, util=90.17% 01:20:09.284 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:20:09.544 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69759 01:20:09.544 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:20:09.544 05:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:20:09.544 [global] 01:20:09.544 thread=1 01:20:09.544 invalidate=1 01:20:09.544 rw=read 01:20:09.544 time_based=1 01:20:09.544 runtime=10 01:20:09.544 ioengine=libaio 01:20:09.544 direct=1 01:20:09.544 bs=4096 01:20:09.544 iodepth=1 01:20:09.544 norandommap=1 01:20:09.544 numjobs=1 01:20:09.544 01:20:09.544 [job0] 01:20:09.544 filename=/dev/nvme0n1 01:20:09.544 [job1] 01:20:09.544 filename=/dev/nvme0n2 
01:20:09.544 [job2] 01:20:09.544 filename=/dev/nvme0n3 01:20:09.544 [job3] 01:20:09.544 filename=/dev/nvme0n4 01:20:09.544 Could not set queue depth (nvme0n1) 01:20:09.544 Could not set queue depth (nvme0n2) 01:20:09.544 Could not set queue depth (nvme0n3) 01:20:09.544 Could not set queue depth (nvme0n4) 01:20:09.544 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:09.544 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:09.544 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:09.544 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:20:09.544 fio-3.35 01:20:09.544 Starting 4 threads 01:20:12.830 05:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:20:12.830 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=122249216, buflen=4096 01:20:12.830 fio: pid=69811, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:20:12.830 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:20:12.830 fio: pid=69808, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:20:12.830 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47304704, buflen=4096 01:20:12.830 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:12.830 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:20:13.091 fio: pid=69799, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:20:13.091 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53014528, buflen=4096 01:20:13.091 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:13.091 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:20:13.091 fio: pid=69800, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:20:13.091 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58298368, buflen=4096 01:20:13.091 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:13.091 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:20:13.351 01:20:13.351 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69799: Mon Dec 9 05:15:04 2024 01:20:13.351 read: IOPS=3963, BW=15.5MiB/s (16.2MB/s)(50.6MiB/3266msec) 01:20:13.351 slat (usec): min=3, max=13500, avg= 9.99, stdev=179.51 01:20:13.351 clat (usec): min=82, max=4807, avg=241.62, stdev=84.00 01:20:13.351 lat (usec): min=103, max=13639, avg=251.61, stdev=197.45 01:20:13.351 clat percentiles (usec): 01:20:13.351 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 126], 20.00th=[ 198], 01:20:13.351 | 30.00th=[ 239], 40.00th=[ 249], 
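This is the hotplug phase of the fio target test: scripts/fio-wrapper starts four single-depth libaio read jobs against the exported namespaces while the backing bdevs (concat0, raid0, Malloc0, Malloc1, Malloc2, ...) are deleted over RPC, so every job is expected to finish with err=95 "Operation not supported", as the per-job summaries below confirm. A standalone fio job file equivalent to the [global]/[job0..3] listing printed above would look roughly like this sketch (the /tmp path is arbitrary; the file fio-wrapper actually generates may differ in detail):

# Sketch only: same layout as the [global]/[job0..3] listing shown in the trace.
cat > /tmp/nvmf-hotplug.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-hotplug.fio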
50.00th=[ 255], 60.00th=[ 260], 01:20:13.351 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 314], 01:20:13.351 | 99.00th=[ 474], 99.50th=[ 578], 99.90th=[ 840], 99.95th=[ 1188], 01:20:13.351 | 99.99th=[ 1893] 01:20:13.351 bw ( KiB/s): min=13936, max=16960, per=19.27%, avg=15149.00, stdev=1024.65, samples=6 01:20:13.351 iops : min= 3484, max= 4240, avg=3787.17, stdev=256.12, samples=6 01:20:13.351 lat (usec) : 100=0.27%, 250=41.44%, 500=57.42%, 750=0.63%, 1000=0.15% 01:20:13.351 lat (msec) : 2=0.06%, 10=0.01% 01:20:13.351 cpu : usr=0.34%, sys=2.79%, ctx=12952, majf=0, minf=1 01:20:13.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 issued rwts: total=12944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.351 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:13.351 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69800: Mon Dec 9 05:15:04 2024 01:20:13.351 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(55.6MiB/3489msec) 01:20:13.351 slat (usec): min=4, max=13151, avg=11.89, stdev=204.05 01:20:13.351 clat (usec): min=66, max=120991, avg=232.60, stdev=1392.26 01:20:13.351 lat (usec): min=73, max=121010, avg=244.48, stdev=1406.94 01:20:13.351 clat percentiles (usec): 01:20:13.351 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 123], 01:20:13.351 | 30.00th=[ 200], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 253], 01:20:13.351 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 01:20:13.351 | 99.00th=[ 334], 99.50th=[ 408], 99.90th=[ 832], 99.95th=[ 1172], 01:20:13.351 | 99.99th=[113771] 01:20:13.351 bw ( KiB/s): min=12918, max=18168, per=19.61%, avg=15418.33, stdev=1678.12, samples=6 01:20:13.351 iops : min= 3229, max= 4542, avg=3854.50, stdev=419.68, samples=6 01:20:13.351 lat (usec) : 100=17.80%, 250=35.10%, 500=46.82%, 750=0.07%, 1000=0.14% 01:20:13.351 lat (msec) : 2=0.04%, 4=0.01%, 250=0.01% 01:20:13.351 cpu : usr=0.32%, sys=2.95%, ctx=14241, majf=0, minf=1 01:20:13.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 issued rwts: total=14234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.351 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:13.351 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69808: Mon Dec 9 05:15:04 2024 01:20:13.351 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(45.1MiB/3071msec) 01:20:13.351 slat (usec): min=3, max=11779, avg= 9.21, stdev=131.45 01:20:13.351 clat (usec): min=111, max=1932, avg=255.91, stdev=59.86 01:20:13.351 lat (usec): min=120, max=11970, avg=265.12, stdev=144.76 01:20:13.351 clat percentiles (usec): 01:20:13.351 | 1.00th=[ 157], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 229], 01:20:13.351 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 01:20:13.351 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 314], 01:20:13.351 | 99.00th=[ 498], 99.50th=[ 594], 99.90th=[ 824], 99.95th=[ 1004], 01:20:13.351 | 99.99th=[ 1565] 01:20:13.351 bw ( KiB/s): min=13944, max=16960, per=19.17%, avg=15074.60, stdev=1121.00, samples=5 01:20:13.351 iops : min= 3486, max= 4240, 
avg=3768.60, stdev=280.27, samples=5 01:20:13.351 lat (usec) : 250=36.19%, 500=62.84%, 750=0.72%, 1000=0.19% 01:20:13.351 lat (msec) : 2=0.05% 01:20:13.351 cpu : usr=0.33%, sys=2.83%, ctx=11559, majf=0, minf=2 01:20:13.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 issued rwts: total=11550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.351 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:13.351 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69811: Mon Dec 9 05:15:04 2024 01:20:13.351 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(117MiB/2873msec) 01:20:13.351 slat (nsec): min=4502, max=70568, avg=5276.02, stdev=1770.64 01:20:13.351 clat (usec): min=73, max=1266, avg=90.59, stdev=11.07 01:20:13.351 lat (usec): min=78, max=1271, avg=95.86, stdev=11.34 01:20:13.351 clat percentiles (usec): 01:20:13.351 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 01:20:13.351 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 01:20:13.351 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 99], 95.00th=[ 102], 01:20:13.351 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 137], 99.95th=[ 258], 01:20:13.351 | 99.99th=[ 412] 01:20:13.351 bw ( KiB/s): min=39776, max=42104, per=52.79%, avg=41504.00, stdev=975.10, samples=5 01:20:13.351 iops : min= 9944, max=10526, avg=10376.00, stdev=243.77, samples=5 01:20:13.351 lat (usec) : 100=92.50%, 250=7.44%, 500=0.05% 01:20:13.351 lat (msec) : 2=0.01% 01:20:13.351 cpu : usr=0.70%, sys=4.91%, ctx=29847, majf=0, minf=2 01:20:13.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:20:13.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:20:13.351 issued rwts: total=29847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:20:13.351 latency : target=0, window=0, percentile=100.00%, depth=1 01:20:13.351 01:20:13.351 Run status group 0 (all jobs): 01:20:13.351 READ: bw=76.8MiB/s (80.5MB/s), 14.7MiB/s-40.6MiB/s (15.4MB/s-42.6MB/s), io=268MiB (281MB), run=2873-3489msec 01:20:13.351 01:20:13.351 Disk stats (read/write): 01:20:13.351 nvme0n1: ios=11841/0, merge=0/0, ticks=2924/0, in_queue=2924, util=95.26% 01:20:13.351 nvme0n2: ios=14187/0, merge=0/0, ticks=3213/0, in_queue=3213, util=95.27% 01:20:13.351 nvme0n3: ios=10731/0, merge=0/0, ticks=2729/0, in_queue=2729, util=96.61% 01:20:13.351 nvme0n4: ios=27250/0, merge=0/0, ticks=2514/0, in_queue=2514, util=96.44% 01:20:13.351 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:13.351 05:15:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:20:13.615 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:13.615 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:20:13.876 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:13.876 
05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:20:14.136 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:20:14.136 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69759 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:20:14.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:14.396 nvmf hotplug test: fio failed as expected 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:20:14.396 05:15:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:14.396 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:20:14.396 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 01:20:14.659 rmmod nvme_tcp 01:20:14.659 rmmod nvme_fabrics 01:20:14.659 rmmod nvme_keyring 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69273 ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69273 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69273 ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69273 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69273 01:20:14.659 killing process with pid 69273 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69273' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69273 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69273 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:20:14.659 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:20:15.016 05:15:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:20:15.016 01:20:15.016 real 0m17.726s 01:20:15.016 user 1m7.823s 01:20:15.016 sys 0m6.277s 01:20:15.016 ************************************ 01:20:15.016 END TEST nvmf_fio_target 01:20:15.016 ************************************ 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:20:15.016 ************************************ 01:20:15.016 START TEST nvmf_bdevio 01:20:15.016 ************************************ 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:20:15.016 * Looking for test storage... 
01:20:15.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 01:20:15.016 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:15.277 --rc genhtml_branch_coverage=1 01:20:15.277 --rc genhtml_function_coverage=1 01:20:15.277 --rc genhtml_legend=1 01:20:15.277 --rc geninfo_all_blocks=1 01:20:15.277 --rc geninfo_unexecuted_blocks=1 01:20:15.277 01:20:15.277 ' 01:20:15.277 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:15.278 --rc genhtml_branch_coverage=1 01:20:15.278 --rc genhtml_function_coverage=1 01:20:15.278 --rc genhtml_legend=1 01:20:15.278 --rc geninfo_all_blocks=1 01:20:15.278 --rc geninfo_unexecuted_blocks=1 01:20:15.278 01:20:15.278 ' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:15.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:15.278 --rc genhtml_branch_coverage=1 01:20:15.278 --rc genhtml_function_coverage=1 01:20:15.278 --rc genhtml_legend=1 01:20:15.278 --rc geninfo_all_blocks=1 01:20:15.278 --rc geninfo_unexecuted_blocks=1 01:20:15.278 01:20:15.278 ' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:15.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:15.278 --rc genhtml_branch_coverage=1 01:20:15.278 --rc genhtml_function_coverage=1 01:20:15.278 --rc genhtml_legend=1 01:20:15.278 --rc geninfo_all_blocks=1 01:20:15.278 --rc geninfo_unexecuted_blocks=1 01:20:15.278 01:20:15.278 ' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:20:15.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
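Earlier in this preamble the trace steps through scripts/common.sh's component-wise version comparison (lt 1.15 2 returns true, so the older lcov keeps the --rc lcov_branch_coverage / --rc lcov_function_coverage switches). A minimal bash sketch of that comparison, assuming plain dot-separated numeric versions (the helper name version_lt is made up for the sketch):

# Minimal sketch of the component-wise "less than" traced above; version_lt is a made-up name.
version_lt() {
    local IFS=.                     # split fields on dots (the traced helper also splits on - and :)
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
# 1.15 < 2, so an old lcov keeps the branch/function coverage options:
version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'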
01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:20:15.278 Cannot find device "nvmf_init_br" 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:20:15.278 Cannot find device "nvmf_init_br2" 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:20:15.278 Cannot find device "nvmf_tgt_br" 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:20:15.278 Cannot find device "nvmf_tgt_br2" 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:20:15.278 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:20:15.279 Cannot find device "nvmf_init_br" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:20:15.279 Cannot find device "nvmf_init_br2" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:20:15.279 Cannot find device "nvmf_tgt_br" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:20:15.279 Cannot find device "nvmf_tgt_br2" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:20:15.279 Cannot find device "nvmf_br" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:20:15.279 Cannot find device "nvmf_init_if" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:20:15.279 Cannot find device "nvmf_init_if2" 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:15.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:20:15.279 
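The trace that follows is nvmftestinit building the virtual test network: two veth pairs on the initiator side (nvmf_init_if/_if2 with their *_br peers), two on the target side whose nvmf_tgt_if/_if2 ends are moved into the nvmf_tgt_ns_spdk namespace, everything bridged through nvmf_br and addressed 10.0.0.1-10.0.0.4/24, with the port-4420 iptables openings tagged SPDK_NVMF so teardown can strip them with a single grep. A condensed sketch of the same sequence, using the names and addresses the log shows:

# Condensed sketch of the setup traced below; interface names and addresses are the ones in the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Rules carry an SPDK_NVMF comment so teardown can do: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                # initiator -> target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # and back from inside the namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2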
05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:20:15.279 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:20:15.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:15.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:20:15.540 01:20:15.540 --- 10.0.0.3 ping statistics --- 01:20:15.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:15.540 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:20:15.540 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:20:15.540 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:20:15.540 01:20:15.540 --- 10.0.0.4 ping statistics --- 01:20:15.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:15.540 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:15.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:15.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:20:15.540 01:20:15.540 --- 10.0.0.1 ping statistics --- 01:20:15.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:15.540 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:20:15.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:20:15.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 01:20:15.540 01:20:15.540 --- 10.0.0.2 ping statistics --- 01:20:15.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:15.540 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:15.540 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:20:15.541 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:20:15.541 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:15.541 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:20:15.541 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:20:15.541 05:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70173 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70173 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70173 ']' 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:15.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:15.541 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:15.541 [2024-12-09 05:15:07.041924] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:20:15.541 [2024-12-09 05:15:07.041977] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:15.802 [2024-12-09 05:15:07.181970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:15.802 [2024-12-09 05:15:07.214097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:15.802 [2024-12-09 05:15:07.214128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:15.802 [2024-12-09 05:15:07.214134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:15.802 [2024-12-09 05:15:07.214138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:15.802 [2024-12-09 05:15:07.214141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:20:15.802 [2024-12-09 05:15:07.214865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:20:15.802 [2024-12-09 05:15:07.215087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:20:15.802 [2024-12-09 05:15:07.215245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:20:15.802 [2024-12-09 05:15:07.215279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.370 [2024-12-09 05:15:07.970333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.370 05:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.630 Malloc0 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
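nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0x78; 0x78 is binary 0111 1000, i.e. cores 3-6, which is exactly the set of "Reactor started on core N" notices above. Spelled out as standalone commands (the readiness loop is only a rough stand-in for what the harness's waitforlisten does against /var/tmp/spdk.sock):

# The launch traced above; -m 0x78 pins the app to cores 3,4,5,6.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# Rough readiness poll: wait until the RPC socket answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done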
01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:16.630 [2024-12-09 05:15:08.032134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:20:16.630 { 01:20:16.630 "params": { 01:20:16.630 "name": "Nvme$subsystem", 01:20:16.630 "trtype": "$TEST_TRANSPORT", 01:20:16.630 "traddr": "$NVMF_FIRST_TARGET_IP", 01:20:16.630 "adrfam": "ipv4", 01:20:16.630 "trsvcid": "$NVMF_PORT", 01:20:16.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:20:16.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:20:16.630 "hdgst": ${hdgst:-false}, 01:20:16.630 "ddgst": ${ddgst:-false} 01:20:16.630 }, 01:20:16.630 "method": "bdev_nvme_attach_controller" 01:20:16.630 } 01:20:16.630 EOF 01:20:16.630 )") 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:20:16.630 05:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:20:16.630 "params": { 01:20:16.630 "name": "Nvme1", 01:20:16.630 "trtype": "tcp", 01:20:16.630 "traddr": "10.0.0.3", 01:20:16.630 "adrfam": "ipv4", 01:20:16.630 "trsvcid": "4420", 01:20:16.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:20:16.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:20:16.630 "hdgst": false, 01:20:16.630 "ddgst": false 01:20:16.630 }, 01:20:16.630 "method": "bdev_nvme_attach_controller" 01:20:16.630 }' 01:20:16.630 [2024-12-09 05:15:08.074474] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
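With the app up, bdevio.sh provisions the target over RPC (TCP transport, a 64 MiB/512-byte malloc bdev, subsystem cnode1 exposing that namespace, listener on 10.0.0.3:4420) and then runs the bdevio harness, feeding it the generated JSON printed above so it attaches controller "Nvme1" to that listener via bdev_nvme_attach_controller. The provisioning sequence, condensed into standalone rpc.py calls with the same names, sizes, and addresses as the trace:

# Condensed sketch of the RPC provisioning traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                      # -u: in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420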
01:20:16.631 [2024-12-09 05:15:08.074899] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70227 ] 01:20:16.631 [2024-12-09 05:15:08.217720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:20:16.893 [2024-12-09 05:15:08.271970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:16.893 [2024-12-09 05:15:08.272600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:16.893 [2024-12-09 05:15:08.272613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:16.893 I/O targets: 01:20:16.893 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:20:16.893 01:20:16.893 01:20:16.893 CUnit - A unit testing framework for C - Version 2.1-3 01:20:16.893 http://cunit.sourceforge.net/ 01:20:16.893 01:20:16.893 01:20:16.893 Suite: bdevio tests on: Nvme1n1 01:20:16.893 Test: blockdev write read block ...passed 01:20:17.151 Test: blockdev write zeroes read block ...passed 01:20:17.151 Test: blockdev write zeroes read no split ...passed 01:20:17.151 Test: blockdev write zeroes read split ...passed 01:20:17.151 Test: blockdev write zeroes read split partial ...passed 01:20:17.151 Test: blockdev reset ...[2024-12-09 05:15:08.568765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:20:17.151 [2024-12-09 05:15:08.568874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f7f50 (9): Bad file descriptor 01:20:17.151 passed 01:20:17.151 Test: blockdev write read 8 blocks ...[2024-12-09 05:15:08.588704] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:20:17.151 passed 01:20:17.151 Test: blockdev write read size > 128k ...passed 01:20:17.151 Test: blockdev write read invalid size ...passed 01:20:17.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:20:17.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:20:17.151 Test: blockdev write read max offset ...passed 01:20:17.151 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:20:17.151 Test: blockdev writev readv 8 blocks ...passed 01:20:17.151 Test: blockdev writev readv 30 x 1block ...passed 01:20:17.151 Test: blockdev writev readv block ...passed 01:20:17.151 Test: blockdev writev readv size > 128k ...passed 01:20:17.151 Test: blockdev writev readv size > 128k in two iovs ...passed 01:20:17.151 Test: blockdev comparev and writev ...[2024-12-09 05:15:08.764005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.764069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.764356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.764751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.764781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.764788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.765043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.765052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:20:17.151 [2024-12-09 05:15:08.765063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:20:17.151 [2024-12-09 05:15:08.765069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
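The completion pairs printed above by nvme_qpair.c are what the comparev-and-writev test produces for fused compare-and-write commands whose compare did not match. Decoding the (SCT/SC) values against the NVMe status code tables (an interpretation from the spec, not something the suite prints itself):

  COMPARE ... (02/85) -> SCT 2h media and data integrity errors, SC 85h Compare Failure
  WRITE   ... (00/09) -> SCT 0h generic, SC 09h Command Aborted due to Failed Fused Command, i.e. the write half is dropped because its fused compare failed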
01:20:17.410 passed 01:20:17.410 Test: blockdev nvme passthru rw ...passed 01:20:17.410 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:15:08.847791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:20:17.410 [2024-12-09 05:15:08.847814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:20:17.410 [2024-12-09 05:15:08.847917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:20:17.410 [2024-12-09 05:15:08.847926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:20:17.410 [2024-12-09 05:15:08.848005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:20:17.410 [2024-12-09 05:15:08.848013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:20:17.410 [2024-12-09 05:15:08.848090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:20:17.410 [2024-12-09 05:15:08.848097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:20:17.410 passed 01:20:17.410 Test: blockdev nvme admin passthru ...passed 01:20:17.410 Test: blockdev copy ...passed 01:20:17.410 01:20:17.410 Run Summary: Type Total Ran Passed Failed Inactive 01:20:17.410 suites 1 1 n/a 0 0 01:20:17.410 tests 23 23 23 0 0 01:20:17.410 asserts 152 152 152 0 n/a 01:20:17.410 01:20:17.410 Elapsed time = 0.903 seconds 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:20:17.670 rmmod nvme_tcp 01:20:17.670 rmmod nvme_fabrics 01:20:17.670 rmmod nvme_keyring 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 70173 ']' 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70173 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70173 ']' 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70173 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70173 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:20:17.670 killing process with pid 70173 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70173' 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70173 01:20:17.670 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70173 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:20:17.930 05:15:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:17.930 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:20:18.191 01:20:18.191 real 0m3.037s 01:20:18.191 user 0m10.336s 01:20:18.191 sys 0m0.679s 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:20:18.191 ************************************ 01:20:18.191 END TEST nvmf_bdevio 01:20:18.191 ************************************ 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:20:18.191 01:20:18.191 real 3m21.217s 01:20:18.191 user 10m40.182s 01:20:18.191 sys 0m46.826s 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:20:18.191 ************************************ 01:20:18.191 END TEST nvmf_target_core 01:20:18.191 ************************************ 01:20:18.191 05:15:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:20:18.191 05:15:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:18.191 05:15:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:18.191 05:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:20:18.191 ************************************ 01:20:18.191 START TEST nvmf_target_extra 01:20:18.191 ************************************ 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:20:18.191 * Looking for test storage... 
01:20:18.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:18.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.191 --rc genhtml_branch_coverage=1 01:20:18.191 --rc genhtml_function_coverage=1 01:20:18.191 --rc genhtml_legend=1 01:20:18.191 --rc geninfo_all_blocks=1 01:20:18.191 --rc geninfo_unexecuted_blocks=1 01:20:18.191 01:20:18.191 ' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:18.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.191 --rc genhtml_branch_coverage=1 01:20:18.191 --rc genhtml_function_coverage=1 01:20:18.191 --rc genhtml_legend=1 01:20:18.191 --rc geninfo_all_blocks=1 01:20:18.191 --rc geninfo_unexecuted_blocks=1 01:20:18.191 01:20:18.191 ' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:18.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.191 --rc genhtml_branch_coverage=1 01:20:18.191 --rc genhtml_function_coverage=1 01:20:18.191 --rc genhtml_legend=1 01:20:18.191 --rc geninfo_all_blocks=1 01:20:18.191 --rc geninfo_unexecuted_blocks=1 01:20:18.191 01:20:18.191 ' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:18.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.191 --rc genhtml_branch_coverage=1 01:20:18.191 --rc genhtml_function_coverage=1 01:20:18.191 --rc genhtml_legend=1 01:20:18.191 --rc geninfo_all_blocks=1 01:20:18.191 --rc geninfo_unexecuted_blocks=1 01:20:18.191 01:20:18.191 ' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:18.191 05:15:09 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:18.191 05:15:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:20:18.192 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:20:18.192 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:20:18.452 ************************************ 01:20:18.452 START TEST nvmf_example 01:20:18.452 ************************************ 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 01:20:18.452 * Looking for test storage... 
01:20:18.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.452 --rc genhtml_branch_coverage=1 01:20:18.452 --rc genhtml_function_coverage=1 01:20:18.452 --rc genhtml_legend=1 01:20:18.452 --rc geninfo_all_blocks=1 01:20:18.452 --rc geninfo_unexecuted_blocks=1 01:20:18.452 01:20:18.452 ' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.452 --rc genhtml_branch_coverage=1 01:20:18.452 --rc genhtml_function_coverage=1 01:20:18.452 --rc genhtml_legend=1 01:20:18.452 --rc geninfo_all_blocks=1 01:20:18.452 --rc geninfo_unexecuted_blocks=1 01:20:18.452 01:20:18.452 ' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.452 --rc genhtml_branch_coverage=1 01:20:18.452 --rc genhtml_function_coverage=1 01:20:18.452 --rc genhtml_legend=1 01:20:18.452 --rc geninfo_all_blocks=1 01:20:18.452 --rc geninfo_unexecuted_blocks=1 01:20:18.452 01:20:18.452 ' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:18.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:18.452 --rc genhtml_branch_coverage=1 01:20:18.452 --rc genhtml_function_coverage=1 01:20:18.452 --rc genhtml_legend=1 01:20:18.452 --rc geninfo_all_blocks=1 01:20:18.452 --rc geninfo_unexecuted_blocks=1 01:20:18.452 01:20:18.452 ' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 01:20:18.452 05:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 01:20:18.452 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:20:18.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 01:20:18.453 05:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:18.453 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:20:18.453 Cannot find device "nvmf_init_br" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:20:18.453 Cannot find device "nvmf_init_br2" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:20:18.453 Cannot find device "nvmf_tgt_br" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:20:18.453 Cannot find device "nvmf_tgt_br2" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:20:18.453 Cannot find device "nvmf_init_br" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:20:18.453 Cannot find device "nvmf_init_br2" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:20:18.453 Cannot find device "nvmf_tgt_br" 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 01:20:18.453 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:20:18.714 Cannot find device "nvmf_tgt_br2" 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:20:18.714 Cannot find device "nvmf_br" 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:20:18.714 Cannot find 
device "nvmf_init_if" 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:20:18.714 Cannot find device "nvmf_init_if2" 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:18.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:18.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:20:18.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:18.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:20:18.714 01:20:18.714 --- 10.0.0.3 ping statistics --- 01:20:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:18.714 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:20:18.714 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:20:18.714 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 01:20:18.714 01:20:18.714 --- 10.0.0.4 ping statistics --- 01:20:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:18.714 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:18.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:20:18.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 01:20:18.714 01:20:18.714 --- 10.0.0.1 ping statistics --- 01:20:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:18.714 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:20:18.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:18.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:20:18.714 01:20:18.714 --- 10.0.0.2 ping statistics --- 01:20:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:18.714 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:18.714 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 01:20:18.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
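Everything from nvmf_veth_init above amounts to a small virtual test network: two initiator veths on the host (10.0.0.1, 10.0.0.2), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), their peer ends slaved to the nvmf_br bridge, iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment so iptr can strip them at teardown, and one ping per address as a sanity check. A condensed sketch of the same topology reduced to a single initiator/target pair, assuming root and that none of the interfaces exist yet (commands taken from the trace):

  # standalone recreation of one initiator/target pair from the trace above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3   # host -> target namespace, as in the log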
01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70520 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70520 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70520 ']' 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:18.715 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.656 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.916 
05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:20:19.916 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:20:32.140 Initializing NVMe Controllers 01:20:32.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:20:32.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:20:32.140 Initialization complete. Launching workers. 
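Condensed, the bring-up that the xtrace above walks through is: start the example nvmf target inside the namespace, drive it over /var/tmp/spdk.sock to create a TCP transport, a malloc bdev, a subsystem with that bdev as namespace 1, and a TCP listener, then point spdk_nvme_perf at that listener. A sketch of the same sequence using scripts/rpc.py directly (rpc_cmd in the log is effectively the harness's wrapper around it); the paths, NQN, and addresses are the ones from this run, and the wait-for-socket step is left as a comment:

  SPDK=/home/vagrant/spdk_repo/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  # start the example target in the namespace on 4 cores (-m 0xF)
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
  nvmfpid=$!
  # ... wait until /var/tmp/spdk.sock accepts RPCs (the harness uses waitforlisten) ...
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512          # 64 MiB bdev, 512 B blocks -> "Malloc0"
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  # 10 s of 4 KiB random I/O, 30% reads, queue depth 64, against the new listener
  "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'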
01:20:32.140 ======================================================== 01:20:32.140 Latency(us) 01:20:32.140 Device Information : IOPS MiB/s Average min max 01:20:32.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19270.80 75.28 3321.91 556.29 23059.70 01:20:32.140 ======================================================== 01:20:32.140 Total : 19270.80 75.28 3321.91 556.29 23059.70 01:20:32.140 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:20:32.140 rmmod nvme_tcp 01:20:32.140 rmmod nvme_fabrics 01:20:32.140 rmmod nvme_keyring 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70520 ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70520 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70520 ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70520 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70520 01:20:32.140 killing process with pid 70520 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70520' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70520 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70520 01:20:32.140 nvmf threads initialize successfully 01:20:32.140 bdev subsystem init successfully 01:20:32.140 created a nvmf target service 01:20:32.140 create targets's poll groups done 01:20:32.140 all subsystems of target started 01:20:32.140 nvmf target is running 01:20:32.140 all subsystems of target stopped 01:20:32.140 destroy targets's poll groups done 01:20:32.140 destroyed the nvmf target service 01:20:32.140 bdev 
subsystem finish successfully 01:20:32.140 nvmf threads destroy successfully 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:20:32.140 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:32.140 05:15:22 
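Teardown is the mirror image of the bring-up: unload the initiator modules, stop the target (the killprocess helper first verifies the pid is alive with kill -0 and inspects its comm, special-casing sudo, before kill/wait), drop the SPDK-owned iptables rules, and dismantle the veth/bridge topology before removing the namespace. A compressed sketch of the network part, using the interface names from this run and assuming _remove_spdk_ns ultimately deletes the namespace:

  # keep every iptables rule except the SPDK_NVMF-tagged ones
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach the bridge ports and bring everything down
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster || true
      ip link set "$ifc" down || true
  done
  # remove the bridge and the host-side veth ends
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  # remove the target-side veth ends, then the namespace itself
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk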
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:32.140 01:20:32.140 real 0m12.272s 01:20:32.140 user 0m44.451s 01:20:32.140 sys 0m1.389s 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:32.140 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 01:20:32.140 ************************************ 01:20:32.140 END TEST nvmf_example 01:20:32.140 ************************************ 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:20:32.141 ************************************ 01:20:32.141 START TEST nvmf_filesystem 01:20:32.141 ************************************ 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 01:20:32.141 * Looking for test storage... 01:20:32.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.141 --rc genhtml_branch_coverage=1 01:20:32.141 --rc genhtml_function_coverage=1 01:20:32.141 --rc genhtml_legend=1 01:20:32.141 --rc geninfo_all_blocks=1 01:20:32.141 --rc geninfo_unexecuted_blocks=1 01:20:32.141 01:20:32.141 ' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.141 --rc genhtml_branch_coverage=1 01:20:32.141 --rc genhtml_function_coverage=1 01:20:32.141 --rc genhtml_legend=1 01:20:32.141 --rc geninfo_all_blocks=1 01:20:32.141 --rc geninfo_unexecuted_blocks=1 01:20:32.141 01:20:32.141 ' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.141 --rc genhtml_branch_coverage=1 01:20:32.141 --rc genhtml_function_coverage=1 01:20:32.141 --rc genhtml_legend=1 01:20:32.141 --rc geninfo_all_blocks=1 01:20:32.141 --rc geninfo_unexecuted_blocks=1 01:20:32.141 01:20:32.141 ' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.141 --rc genhtml_branch_coverage=1 01:20:32.141 --rc genhtml_function_coverage=1 01:20:32.141 --rc genhtml_legend=1 01:20:32.141 --rc geninfo_all_blocks=1 01:20:32.141 --rc geninfo_unexecuted_blocks=1 01:20:32.141 01:20:32.141 ' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 01:20:32.141 05:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # 
CONFIG_LTO=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 01:20:32.141 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@51 -- # CONFIG_RDMA=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 01:20:32.142 05:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 01:20:32.142 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 01:20:32.142 #define SPDK_CONFIG_H 01:20:32.142 #define SPDK_CONFIG_AIO_FSDEV 1 
01:20:32.142 #define SPDK_CONFIG_APPS 1 01:20:32.142 #define SPDK_CONFIG_ARCH native 01:20:32.142 #undef SPDK_CONFIG_ASAN 01:20:32.142 #define SPDK_CONFIG_AVAHI 1 01:20:32.142 #undef SPDK_CONFIG_CET 01:20:32.142 #define SPDK_CONFIG_COPY_FILE_RANGE 1 01:20:32.142 #define SPDK_CONFIG_COVERAGE 1 01:20:32.142 #define SPDK_CONFIG_CROSS_PREFIX 01:20:32.142 #undef SPDK_CONFIG_CRYPTO 01:20:32.142 #undef SPDK_CONFIG_CRYPTO_MLX5 01:20:32.142 #undef SPDK_CONFIG_CUSTOMOCF 01:20:32.142 #undef SPDK_CONFIG_DAOS 01:20:32.142 #define SPDK_CONFIG_DAOS_DIR 01:20:32.142 #define SPDK_CONFIG_DEBUG 1 01:20:32.142 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 01:20:32.142 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 01:20:32.142 #define SPDK_CONFIG_DPDK_INC_DIR 01:20:32.142 #define SPDK_CONFIG_DPDK_LIB_DIR 01:20:32.142 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 01:20:32.142 #undef SPDK_CONFIG_DPDK_UADK 01:20:32.142 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:20:32.142 #define SPDK_CONFIG_EXAMPLES 1 01:20:32.142 #undef SPDK_CONFIG_FC 01:20:32.142 #define SPDK_CONFIG_FC_PATH 01:20:32.142 #define SPDK_CONFIG_FIO_PLUGIN 1 01:20:32.142 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 01:20:32.142 #define SPDK_CONFIG_FSDEV 1 01:20:32.142 #undef SPDK_CONFIG_FUSE 01:20:32.142 #undef SPDK_CONFIG_FUZZER 01:20:32.142 #define SPDK_CONFIG_FUZZER_LIB 01:20:32.142 #define SPDK_CONFIG_GOLANG 1 01:20:32.142 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 01:20:32.142 #define SPDK_CONFIG_HAVE_EVP_MAC 1 01:20:32.142 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 01:20:32.142 #define SPDK_CONFIG_HAVE_KEYUTILS 1 01:20:32.142 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 01:20:32.142 #undef SPDK_CONFIG_HAVE_LIBBSD 01:20:32.142 #undef SPDK_CONFIG_HAVE_LZ4 01:20:32.142 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 01:20:32.142 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 01:20:32.142 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 01:20:32.142 #define SPDK_CONFIG_IDXD 1 01:20:32.142 #define SPDK_CONFIG_IDXD_KERNEL 1 01:20:32.142 #undef SPDK_CONFIG_IPSEC_MB 01:20:32.142 #define SPDK_CONFIG_IPSEC_MB_DIR 01:20:32.142 #define SPDK_CONFIG_ISAL 1 01:20:32.142 #define SPDK_CONFIG_ISAL_CRYPTO 1 01:20:32.142 #define SPDK_CONFIG_ISCSI_INITIATOR 1 01:20:32.142 #define SPDK_CONFIG_LIBDIR 01:20:32.142 #undef SPDK_CONFIG_LTO 01:20:32.142 #define SPDK_CONFIG_MAX_LCORES 128 01:20:32.142 #define SPDK_CONFIG_MAX_NUMA_NODES 1 01:20:32.142 #define SPDK_CONFIG_NVME_CUSE 1 01:20:32.142 #undef SPDK_CONFIG_OCF 01:20:32.142 #define SPDK_CONFIG_OCF_PATH 01:20:32.142 #define SPDK_CONFIG_OPENSSL_PATH 01:20:32.142 #undef SPDK_CONFIG_PGO_CAPTURE 01:20:32.142 #define SPDK_CONFIG_PGO_DIR 01:20:32.142 #undef SPDK_CONFIG_PGO_USE 01:20:32.142 #define SPDK_CONFIG_PREFIX /usr/local 01:20:32.142 #undef SPDK_CONFIG_RAID5F 01:20:32.142 #undef SPDK_CONFIG_RBD 01:20:32.142 #define SPDK_CONFIG_RDMA 1 01:20:32.142 #define SPDK_CONFIG_RDMA_PROV verbs 01:20:32.142 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 01:20:32.142 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 01:20:32.142 #define SPDK_CONFIG_RDMA_SET_TOS 1 01:20:32.142 #define SPDK_CONFIG_SHARED 1 01:20:32.142 #undef SPDK_CONFIG_SMA 01:20:32.142 #define SPDK_CONFIG_TESTS 1 01:20:32.142 #undef SPDK_CONFIG_TSAN 01:20:32.142 #define SPDK_CONFIG_UBLK 1 01:20:32.142 #define SPDK_CONFIG_UBSAN 1 01:20:32.142 #undef SPDK_CONFIG_UNIT_TESTS 01:20:32.142 #undef SPDK_CONFIG_URING 01:20:32.142 #define SPDK_CONFIG_URING_PATH 01:20:32.142 #undef SPDK_CONFIG_URING_ZNS 01:20:32.142 #define SPDK_CONFIG_USDT 1 01:20:32.142 #undef 
SPDK_CONFIG_VBDEV_COMPRESS 01:20:32.142 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 01:20:32.142 #undef SPDK_CONFIG_VFIO_USER 01:20:32.143 #define SPDK_CONFIG_VFIO_USER_DIR 01:20:32.143 #define SPDK_CONFIG_VHOST 1 01:20:32.143 #define SPDK_CONFIG_VIRTIO 1 01:20:32.143 #undef SPDK_CONFIG_VTUNE 01:20:32.143 #define SPDK_CONFIG_VTUNE_DIR 01:20:32.143 #define SPDK_CONFIG_WERROR 1 01:20:32.143 #define SPDK_CONFIG_WPDK_DIR 01:20:32.143 #undef SPDK_CONFIG_XNVME 01:20:32.143 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
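applications.sh appears to gate debug-only tooling on the generated include/spdk/config.h: the long [[ ... == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] match above is just a substring test of the whole header for the SPDK_CONFIG_DEBUG define. A slightly more conventional equivalent of that check (grep instead of a glob match) might look like:

  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
  if [[ -e $config_h ]] && grep -q '^#define SPDK_CONFIG_DEBUG' "$config_h"; then
      echo "debug build detected"          # debug-only apps may be used
  else
      echo "release build detected"        # skip debug-only apps
  fi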
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 01:20:32.143 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 
-- # export SPDK_TEST_LVOL 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 01:20:32.144 
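The long run of ': <value>' / 'export SPDK_TEST_*' pairs is autotest_common.sh giving every test knob a default and exporting it, so a flag left unset by the job config simply falls back to its default while flags the job did set keep their values (here, for example, SPDK_TEST_NVMF and SPDK_RUN_UBSAN resolve to 1 and SPDK_TEST_NVMF_TRANSPORT to tcp). A sketch of that idiom for a few of the flags visible above; the literal defaults in autotest_common.sh may differ, the values shown only mirror what this run exported:

  # assign a default only if the CI config did not already set the flag, then export it
  : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST   # 1 in this run
  : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF             # 1 in this run
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT   # tcp in this run
  : "${SPDK_RUN_UBSAN:=1}";             export SPDK_RUN_UBSAN             # 1 in this run
  : "${SPDK_TEST_NVMF_MDNS:=1}";        export SPDK_TEST_NVMF_MDNS        # 1 in this run
  : "${SPDK_TEST_NVME:=0}";             export SPDK_TEST_NVME             # 0 in this run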
05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:20:32.144 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 01:20:32.145 05:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 70791 ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 70791 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.qB0kCx 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.qB0kCx/tests/target /tmp/spdk.qB0kCx 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979185152 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590175744 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6255362048 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486018048 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 01:20:32.145 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979185152 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590175744 01:20:32.146 
05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6265257984 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97315287040 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=2387492864 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 01:20:32.146 * Looking for test storage... 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979185152 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 01:20:32.146 05:15:22 
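The trace above shows set_test_storage walking its storage candidates and settling on /home/vagrant/spdk_repo/spdk/test/nvmf/target, because the btrfs filesystem backing /home has comfortably more free space than the ~2.2 GB requested. A minimal sketch of that selection logic, assuming GNU df; the candidate list and the requested size come from the trace, everything else is illustrative rather than the harness's exact code:

    # Pick the first candidate directory whose filesystem has enough free space.
    requested_size=2214592512   # 2 GiB plus 64 MiB of headroom, as requested in the trace
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        avail=$(df --output=avail -B1 "$target_dir" 2>/dev/null | tail -n1)
        if [ -n "$avail" ] && [ "$avail" -ge "$requested_size" ]; then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done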
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:32.146 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.146 --rc genhtml_branch_coverage=1 01:20:32.146 --rc genhtml_function_coverage=1 01:20:32.146 --rc genhtml_legend=1 01:20:32.147 --rc geninfo_all_blocks=1 01:20:32.147 --rc geninfo_unexecuted_blocks=1 01:20:32.147 01:20:32.147 ' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.147 --rc genhtml_branch_coverage=1 01:20:32.147 --rc genhtml_function_coverage=1 01:20:32.147 --rc genhtml_legend=1 01:20:32.147 --rc geninfo_all_blocks=1 01:20:32.147 --rc geninfo_unexecuted_blocks=1 01:20:32.147 01:20:32.147 ' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.147 --rc genhtml_branch_coverage=1 01:20:32.147 --rc genhtml_function_coverage=1 01:20:32.147 --rc genhtml_legend=1 01:20:32.147 --rc geninfo_all_blocks=1 01:20:32.147 --rc geninfo_unexecuted_blocks=1 01:20:32.147 01:20:32.147 ' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:32.147 --rc genhtml_branch_coverage=1 01:20:32.147 --rc genhtml_function_coverage=1 01:20:32.147 --rc genhtml_legend=1 01:20:32.147 --rc geninfo_all_blocks=1 01:20:32.147 --rc geninfo_unexecuted_blocks=1 01:20:32.147 01:20:32.147 ' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:20:32.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 01:20:32.147 05:15:22 
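The one complaint in the trace above, "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", comes from build_nvmf_app_args comparing an empty value with the numeric -eq operator ('[' '' -eq 1 ']'); the test simply evaluates false, the branch is skipped, and the run continues. A defensive variant of that kind of check, with a placeholder flag name since the actual variable at common.sh line 33 is not visible in this excerpt:

    # Defaulting the flag avoids the "integer expression expected" noise when it is unset.
    if [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]; then   # SOME_OPTIONAL_FLAG is a hypothetical name
        NVMF_APP+=(--some-extra-arg)                # illustrative; the real branch appends app arguments
    fi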
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:20:32.147 Cannot find device "nvmf_init_br" 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 01:20:32.147 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:20:32.147 Cannot find device "nvmf_init_br2" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:20:32.148 Cannot find device "nvmf_tgt_br" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:20:32.148 Cannot find device "nvmf_tgt_br2" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:20:32.148 Cannot find device "nvmf_init_br" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:20:32.148 Cannot find device "nvmf_init_br2" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:20:32.148 Cannot find device "nvmf_tgt_br" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:20:32.148 Cannot find device "nvmf_tgt_br2" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:20:32.148 Cannot find device "nvmf_br" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:20:32.148 Cannot find device "nvmf_init_if" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:20:32.148 Cannot find device "nvmf_init_if2" 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:32.148 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:32.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:20:32.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:20:32.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:20:32.148 01:20:32.148 --- 10.0.0.3 ping statistics --- 01:20:32.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:32.148 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:20:32.148 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:20:32.148 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 01:20:32.148 01:20:32.148 --- 10.0.0.4 ping statistics --- 01:20:32.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:32.148 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:32.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:32.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:20:32.148 01:20:32.148 --- 10.0.0.1 ping statistics --- 01:20:32.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:32.148 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:20:32.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:20:32.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 01:20:32.148 01:20:32.148 --- 10.0.0.2 ping statistics --- 01:20:32.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:32.148 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:20:32.148 ************************************ 01:20:32.148 START TEST nvmf_filesystem_no_in_capsule 01:20:32.148 ************************************ 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=70981 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 70981 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 70981 ']' 01:20:32.148 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:32.149 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:20:32.149 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:32.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:32.149 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:32.149 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:32.149 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.149 [2024-12-09 05:15:22.767151] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:32.149 [2024-12-09 05:15:22.767214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:32.149 [2024-12-09 05:15:22.906508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:32.149 [2024-12-09 05:15:22.942009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:32.149 [2024-12-09 05:15:22.942050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:32.149 [2024-12-09 05:15:22.942057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:32.149 [2024-12-09 05:15:22.942062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:32.149 [2024-12-09 05:15:22.942067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
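By this point nvmf_veth_init has rebuilt the virtual test network and nvmfappstart has launched the target inside the nvmf_tgt_ns_spdk namespace. Condensed from the commands traced above, with the interface names and 10.0.0.0/24 addresses taken from this run (the initial cleanup attempts and the iptables comment tags are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs; the *_br ends later join a bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br      # bridge the host and namespace halves together
    done
    # Open NVMe/TCP port 4420 towards the initiator interfaces and allow bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Launch the target in the namespace; the harness then polls /var/tmp/spdk.sock (waitforlisten).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The four ping checks in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify the bridge end to end before any NVMe traffic is attempted.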
01:20:32.149 [2024-12-09 05:15:22.942724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:32.149 [2024-12-09 05:15:22.942782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:32.149 [2024-12-09 05:15:22.942813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:32.149 [2024-12-09 05:15:22.942815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.149 [2024-12-09 05:15:23.673821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.149 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.408 Malloc1 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.408 05:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.408 [2024-12-09 05:15:23.796668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.408 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:20:32.408 { 01:20:32.408 "aliases": [ 01:20:32.408 "cc383449-f786-408d-b363-2e906280ecca" 01:20:32.408 ], 01:20:32.408 "assigned_rate_limits": { 01:20:32.408 "r_mbytes_per_sec": 0, 01:20:32.408 "rw_ios_per_sec": 0, 01:20:32.409 "rw_mbytes_per_sec": 0, 01:20:32.409 "w_mbytes_per_sec": 0 01:20:32.409 }, 01:20:32.409 "block_size": 512, 01:20:32.409 "claim_type": "exclusive_write", 01:20:32.409 "claimed": true, 01:20:32.409 "driver_specific": {}, 01:20:32.409 "memory_domains": [ 01:20:32.409 { 01:20:32.409 "dma_device_id": "system", 01:20:32.409 "dma_device_type": 1 01:20:32.409 }, 01:20:32.409 { 01:20:32.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:20:32.409 
"dma_device_type": 2 01:20:32.409 } 01:20:32.409 ], 01:20:32.409 "name": "Malloc1", 01:20:32.409 "num_blocks": 1048576, 01:20:32.409 "product_name": "Malloc disk", 01:20:32.409 "supported_io_types": { 01:20:32.409 "abort": true, 01:20:32.409 "compare": false, 01:20:32.409 "compare_and_write": false, 01:20:32.409 "copy": true, 01:20:32.409 "flush": true, 01:20:32.409 "get_zone_info": false, 01:20:32.409 "nvme_admin": false, 01:20:32.409 "nvme_io": false, 01:20:32.409 "nvme_io_md": false, 01:20:32.409 "nvme_iov_md": false, 01:20:32.409 "read": true, 01:20:32.409 "reset": true, 01:20:32.409 "seek_data": false, 01:20:32.409 "seek_hole": false, 01:20:32.409 "unmap": true, 01:20:32.409 "write": true, 01:20:32.409 "write_zeroes": true, 01:20:32.409 "zcopy": true, 01:20:32.409 "zone_append": false, 01:20:32.409 "zone_management": false 01:20:32.409 }, 01:20:32.409 "uuid": "cc383449-f786-408d-b363-2e906280ecca", 01:20:32.409 "zoned": false 01:20:32.409 } 01:20:32.409 ]' 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 01:20:32.409 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:20:32.668 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 01:20:32.668 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 01:20:32.668 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:20:32.668 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:20:32.668 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # 
lsblk -l -o NAME,SERIAL 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 01:20:34.578 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 01:20:34.835 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:35.768 ************************************ 01:20:35.768 START TEST filesystem_ext4 01:20:35.768 ************************************ 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
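Stripped of the xtrace prefixes, the target/host bring-up recorded above condenses to the sequence below. This is a sketch, not a copy of target/filesystem.sh: rpc_cmd is assumed to be the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the lsblk/grep lines that the trace runs as separate steps are piped together here.

  # Target side: TCP transport, 512 MiB malloc bdev (512 B x 1048576 blocks = 536870912 B,
  # the malloc_size later compared against the host-reported namespace size), subsystem cnode1
  # with serial SPDKISFASTANDAWESOME, its namespace, and a listener on 10.0.0.3:4420.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: connect, find the block device by serial, then lay down a single GPT partition.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
      --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe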
01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 01:20:35.768 mke2fs 1.47.0 (5-Feb-2023) 01:20:35.768 Discarding device blocks: 0/522240 done 01:20:35.768 Creating filesystem with 522240 1k blocks and 130560 inodes 01:20:35.768 Filesystem UUID: 4e32f55c-72aa-4387-a227-105e748d6852 01:20:35.768 Superblock backups stored on blocks: 01:20:35.768 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 01:20:35.768 01:20:35.768 Allocating group tables: 0/64 done 01:20:35.768 Writing inode tables: 0/64 done 01:20:35.768 Creating journal (8192 blocks): done 01:20:35.768 Writing superblocks and filesystem accounting information: 0/64 done 01:20:35.768 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 01:20:35.768 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:42.341 
05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 70981 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:42.341 01:20:42.341 real 0m5.539s 01:20:42.341 user 0m0.007s 01:20:42.341 sys 0m0.045s 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 01:20:42.341 ************************************ 01:20:42.341 END TEST filesystem_ext4 01:20:42.341 ************************************ 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:42.341 ************************************ 01:20:42.341 START TEST filesystem_btrfs 01:20:42.341 ************************************ 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 01:20:42.341 05:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 01:20:42.341 btrfs-progs v6.8.1 01:20:42.341 See https://btrfs.readthedocs.io for more information. 01:20:42.341 01:20:42.341 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 01:20:42.341 NOTE: several default settings have changed in version 5.15, please make sure 01:20:42.341 this does not affect your deployments: 01:20:42.341 - DUP for metadata (-m dup) 01:20:42.341 - enabled no-holes (-O no-holes) 01:20:42.341 - enabled free-space-tree (-R free-space-tree) 01:20:42.341 01:20:42.341 Label: (null) 01:20:42.341 UUID: 26da4f51-32e3-417e-8264-cd932f5e1912 01:20:42.341 Node size: 16384 01:20:42.341 Sector size: 4096 (CPU page size: 4096) 01:20:42.341 Filesystem size: 510.00MiB 01:20:42.341 Block group profiles: 01:20:42.341 Data: single 8.00MiB 01:20:42.341 Metadata: DUP 32.00MiB 01:20:42.341 System: DUP 8.00MiB 01:20:42.341 SSD detected: yes 01:20:42.341 Zoned device: no 01:20:42.341 Features: extref, skinny-metadata, no-holes, free-space-tree 01:20:42.341 Checksum: crc32c 01:20:42.341 Number of devices: 1 01:20:42.341 Devices: 01:20:42.341 ID SIZE PATH 01:20:42.341 1 510.00MiB /dev/nvme0n1p1 01:20:42.341 01:20:42.341 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 70981 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:42.342 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:42.342 
05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:42.342 01:20:42.342 real 0m0.212s 01:20:42.342 user 0m0.016s 01:20:42.342 sys 0m0.040s 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 01:20:42.342 ************************************ 01:20:42.342 END TEST filesystem_btrfs 01:20:42.342 ************************************ 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:42.342 ************************************ 01:20:42.342 START TEST filesystem_xfs 01:20:42.342 ************************************ 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 01:20:42.342 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 01:20:42.342 = sectsz=512 attr=2, projid32bit=1 01:20:42.342 = crc=1 finobt=1, sparse=1, rmapbt=0 01:20:42.342 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 01:20:42.342 data 
= bsize=4096 blocks=130560, imaxpct=25 01:20:42.342 = sunit=0 swidth=0 blks 01:20:42.342 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 01:20:42.342 log =internal log bsize=4096 blocks=16384, version=2 01:20:42.342 = sectsz=512 sunit=0 blks, lazy-count=1 01:20:42.342 realtime =none extsz=4096 blocks=0, rtextents=0 01:20:42.342 Discarding blocks...Done. 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 01:20:42.342 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 70981 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:44.239 01:20:44.239 real 0m2.817s 01:20:44.239 user 0m0.012s 01:20:44.239 sys 0m0.041s 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:44.239 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 01:20:44.239 ************************************ 01:20:44.239 END TEST filesystem_xfs 01:20:44.239 ************************************ 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:20:44.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:20:44.497 05:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 70981 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 70981 ']' 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 70981 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70981 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:44.497 killing process with pid 70981 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70981' 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 70981 01:20:44.497 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 70981 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 01:20:44.755 01:20:44.755 real 0m13.480s 01:20:44.755 user 0m52.174s 01:20:44.755 sys 0m1.329s 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:44.755 ************************************ 01:20:44.755 END TEST nvmf_filesystem_no_in_capsule 01:20:44.755 ************************************ 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:20:44.755 ************************************ 01:20:44.755 START TEST nvmf_filesystem_in_capsule 01:20:44.755 ************************************ 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71342 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71342 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71342 ']' 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:44.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
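The in-capsule pass that starts here repeats the exact flow of the previous test; the functional difference visible in the trace is the transport's in-capsule data size, which also flips the later '[' 4096 -eq 0 ']' branch so the partition/filesystem subtests run under the *_in_capsule names. Side by side (the -c option is taken to be nvmf_create_transport's in-capsule-data-size parameter):

  # nvmf_filesystem_no_in_capsule (earlier, target/filesystem.sh@52):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # no data carried in the command capsule
  # nvmf_filesystem_in_capsule (this pass, in_capsule=4096):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # up to 4096 bytes of data in-capsule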
01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:44.755 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:20:44.755 [2024-12-09 05:15:36.283627] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:44.755 [2024-12-09 05:15:36.283688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:45.059 [2024-12-09 05:15:36.425486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:45.059 [2024-12-09 05:15:36.460853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:45.059 [2024-12-09 05:15:36.460893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:45.059 [2024-12-09 05:15:36.460899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:45.059 [2024-12-09 05:15:36.460904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:45.059 [2024-12-09 05:15:36.460909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:20:45.059 [2024-12-09 05:15:36.461580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:45.059 [2024-12-09 05:15:36.461628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:45.059 [2024-12-09 05:15:36.461712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:45.059 [2024-12-09 05:15:36.461723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.623 05:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.623 [2024-12-09 05:15:37.141181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 01:20:45.623 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.624 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.624 Malloc1 01:20:45.624 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.624 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:20:45.624 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.624 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.880 [2024-12-09 05:15:37.252064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1384 -- # local bs 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.880 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 01:20:45.881 { 01:20:45.881 "aliases": [ 01:20:45.881 "c83b3557-04ff-46a7-a1aa-4e927beb8280" 01:20:45.881 ], 01:20:45.881 "assigned_rate_limits": { 01:20:45.881 "r_mbytes_per_sec": 0, 01:20:45.881 "rw_ios_per_sec": 0, 01:20:45.881 "rw_mbytes_per_sec": 0, 01:20:45.881 "w_mbytes_per_sec": 0 01:20:45.881 }, 01:20:45.881 "block_size": 512, 01:20:45.881 "claim_type": "exclusive_write", 01:20:45.881 "claimed": true, 01:20:45.881 "driver_specific": {}, 01:20:45.881 "memory_domains": [ 01:20:45.881 { 01:20:45.881 "dma_device_id": "system", 01:20:45.881 "dma_device_type": 1 01:20:45.881 }, 01:20:45.881 { 01:20:45.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:20:45.881 "dma_device_type": 2 01:20:45.881 } 01:20:45.881 ], 01:20:45.881 "name": "Malloc1", 01:20:45.881 "num_blocks": 1048576, 01:20:45.881 "product_name": "Malloc disk", 01:20:45.881 "supported_io_types": { 01:20:45.881 "abort": true, 01:20:45.881 "compare": false, 01:20:45.881 "compare_and_write": false, 01:20:45.881 "copy": true, 01:20:45.881 "flush": true, 01:20:45.881 "get_zone_info": false, 01:20:45.881 "nvme_admin": false, 01:20:45.881 "nvme_io": false, 01:20:45.881 "nvme_io_md": false, 01:20:45.881 "nvme_iov_md": false, 01:20:45.881 "read": true, 01:20:45.881 "reset": true, 01:20:45.881 "seek_data": false, 01:20:45.881 "seek_hole": false, 01:20:45.881 "unmap": true, 01:20:45.881 "write": true, 01:20:45.881 "write_zeroes": true, 01:20:45.881 "zcopy": true, 01:20:45.881 "zone_append": false, 01:20:45.881 "zone_management": false 01:20:45.881 }, 01:20:45.881 "uuid": "c83b3557-04ff-46a7-a1aa-4e927beb8280", 01:20:45.881 "zoned": false 01:20:45.881 } 01:20:45.881 ]' 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 01:20:45.881 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:20:46.137 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 01:20:46.137 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 01:20:46.137 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:20:46.137 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:20:46.138 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 01:20:48.048 05:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 01:20:48.048 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:49.440 ************************************ 01:20:49.440 START TEST filesystem_in_capsule_ext4 01:20:49.440 ************************************ 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 01:20:49.440 mke2fs 1.47.0 (5-Feb-2023) 01:20:49.440 Discarding device blocks: 0/522240 done 01:20:49.440 Creating filesystem with 522240 1k blocks and 130560 inodes 01:20:49.440 Filesystem UUID: 6561c2c0-bb19-4295-bd27-09ae686d175c 01:20:49.440 Superblock backups 
stored on blocks: 01:20:49.440 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 01:20:49.440 01:20:49.440 Allocating group tables: 0/64 done 01:20:49.440 Writing inode tables: 0/64 done 01:20:49.440 Creating journal (8192 blocks): done 01:20:49.440 Writing superblocks and filesystem accounting information: 0/64 done 01:20:49.440 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 01:20:49.440 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71342 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:54.698 ************************************ 01:20:54.698 END TEST filesystem_in_capsule_ext4 01:20:54.698 ************************************ 01:20:54.698 01:20:54.698 real 0m5.532s 01:20:54.698 user 0m0.022s 01:20:54.698 sys 0m0.048s 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1111 -- # xtrace_disable 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:54.698 ************************************ 01:20:54.698 START TEST filesystem_in_capsule_btrfs 01:20:54.698 ************************************ 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 01:20:54.698 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 01:20:54.699 btrfs-progs v6.8.1 01:20:54.699 See https://btrfs.readthedocs.io for more information. 01:20:54.699 01:20:54.699 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
01:20:54.699 NOTE: several default settings have changed in version 5.15, please make sure 01:20:54.699 this does not affect your deployments: 01:20:54.699 - DUP for metadata (-m dup) 01:20:54.699 - enabled no-holes (-O no-holes) 01:20:54.699 - enabled free-space-tree (-R free-space-tree) 01:20:54.699 01:20:54.699 Label: (null) 01:20:54.699 UUID: 0138f4e8-50d7-476b-8ee7-b4d6c72da496 01:20:54.699 Node size: 16384 01:20:54.699 Sector size: 4096 (CPU page size: 4096) 01:20:54.699 Filesystem size: 510.00MiB 01:20:54.699 Block group profiles: 01:20:54.699 Data: single 8.00MiB 01:20:54.699 Metadata: DUP 32.00MiB 01:20:54.699 System: DUP 8.00MiB 01:20:54.699 SSD detected: yes 01:20:54.699 Zoned device: no 01:20:54.699 Features: extref, skinny-metadata, no-holes, free-space-tree 01:20:54.699 Checksum: crc32c 01:20:54.699 Number of devices: 1 01:20:54.699 Devices: 01:20:54.699 ID SIZE PATH 01:20:54.699 1 510.00MiB /dev/nvme0n1p1 01:20:54.699 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 01:20:54.699 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71342 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:54.956 ************************************ 01:20:54.956 END TEST filesystem_in_capsule_btrfs 01:20:54.956 ************************************ 01:20:54.956 01:20:54.956 real 0m0.177s 01:20:54.956 user 0m0.013s 01:20:54.956 sys 0m0.048s 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
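The mkfs lines traced above all go through the make_filesystem helper from common/autotest_common.sh (the @930-@949 trace entries): it picks a force flag based on the filesystem type and then runs mkfs.$fstype on the partition. A minimal sketch of that helper, reconstructed from the traced lines, is below; the ext4 branch (-F) and the retry bound are assumptions, since the trace only shows the non-ext4 path succeeding on the first attempt.

# Sketch of make_filesystem as traced above; retry bound and ext4 branch assumed.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F                          # assumed: mkfs.ext4 forces with -F
    else
        force=-f                          # traced: mkfs.btrfs / mkfs.xfs force with -f
    fi
    until mkfs."$fstype" $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -lt 5 ] || return 1        # retry limit assumed for illustration
        sleep 1
    done
    return 0
}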
01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:54.956 ************************************ 01:20:54.956 START TEST filesystem_in_capsule_xfs 01:20:54.956 ************************************ 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 01:20:54.956 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 01:20:54.956 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 01:20:54.956 = sectsz=512 attr=2, projid32bit=1 01:20:54.956 = crc=1 finobt=1, sparse=1, rmapbt=0 01:20:54.956 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 01:20:54.956 data = bsize=4096 blocks=130560, imaxpct=25 01:20:54.956 = sunit=0 swidth=0 blks 01:20:54.956 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 01:20:54.956 log =internal log bsize=4096 blocks=16384, version=2 01:20:54.956 = sectsz=512 sunit=0 blks, lazy-count=1 01:20:54.956 realtime =none extsz=4096 blocks=0, rtextents=0 01:20:55.589 Discarding blocks...Done. 
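After mkfs succeeds, target/filesystem.sh exercises the freshly formatted NVMe/TCP namespace with a simple create/delete round trip and then checks that the target is still healthy. The steps below are condensed directly from the filesystem.sh@23-@43 trace lines; 71342 is the nvmf_tgt pid from this run.

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 71342                             # nvmf_tgt process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # exported namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # test partition still visible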
01:20:55.589 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 01:20:55.589 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71342 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 01:20:57.528 ************************************ 01:20:57.528 END TEST filesystem_in_capsule_xfs 01:20:57.528 ************************************ 01:20:57.528 01:20:57.528 real 0m2.577s 01:20:57.528 user 0m0.015s 01:20:57.528 sys 0m0.043s 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 01:20:57.528 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:20:57.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71342 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71342 ']' 01:20:57.528 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71342 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71342 01:20:57.529 killing process with pid 71342 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71342' 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71342 01:20:57.529 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71342 01:20:57.793 ************************************ 01:20:57.793 END TEST nvmf_filesystem_in_capsule 01:20:57.793 ************************************ 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
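The tail of the in-capsule test tears the fabric back down: the partition is removed under flock, the initiator disconnects, waitforserial_disconnect polls lsblk until no block device advertises the test serial, and the subsystem and target process are removed. A sketch grounded in the traced commands follows; the scripts/rpc.py invocation is an assumption, since the log only shows the rpc_cmd wrapper.

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1      # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# waitforserial_disconnect: poll until the serial disappears from lsblk
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc.py path assumed
kill 71342 && wait 71342                            # killprocess: stop nvmf_tgt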
01:20:57.793 01:20:57.793 real 0m13.080s 01:20:57.793 user 0m50.681s 01:20:57.793 sys 0m1.238s 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 01:20:57.793 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:20:57.793 rmmod nvme_tcp 01:20:58.050 rmmod nvme_fabrics 01:20:58.050 rmmod nvme_keyring 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:20:58.050 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:20:58.051 05:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:58.051 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 01:20:58.309 ************************************ 01:20:58.309 END TEST nvmf_filesystem 01:20:58.309 ************************************ 01:20:58.309 01:20:58.309 real 0m27.561s 01:20:58.309 user 1m43.159s 01:20:58.309 sys 0m2.965s 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:20:58.309 ************************************ 01:20:58.309 START TEST nvmf_target_discovery 01:20:58.309 ************************************ 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 01:20:58.309 * Looking for test storage... 
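The nvmftestfini trace just above unloads the kernel NVMe/TCP modules, strips only the SPDK-tagged iptables rules, and dismantles the veth/bridge/namespace topology. A condensed sketch of that cleanup, taken from the nvmf/common.sh lines in the trace, is below; the final namespace removal is an assumption, since _remove_spdk_ns runs with its output redirected in the log.

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep all rules except SPDK's
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk      # assumed: _remove_spdk_ns drops the namespace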
01:20:58.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:20:58.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:58.309 --rc genhtml_branch_coverage=1 01:20:58.309 --rc genhtml_function_coverage=1 01:20:58.309 --rc genhtml_legend=1 01:20:58.309 --rc geninfo_all_blocks=1 01:20:58.309 --rc geninfo_unexecuted_blocks=1 01:20:58.309 01:20:58.309 ' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:20:58.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:58.309 --rc genhtml_branch_coverage=1 01:20:58.309 --rc genhtml_function_coverage=1 01:20:58.309 --rc genhtml_legend=1 01:20:58.309 --rc geninfo_all_blocks=1 01:20:58.309 --rc geninfo_unexecuted_blocks=1 01:20:58.309 01:20:58.309 ' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:20:58.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:58.309 --rc genhtml_branch_coverage=1 01:20:58.309 --rc genhtml_function_coverage=1 01:20:58.309 --rc genhtml_legend=1 01:20:58.309 --rc geninfo_all_blocks=1 01:20:58.309 --rc geninfo_unexecuted_blocks=1 01:20:58.309 01:20:58.309 ' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:20:58.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:20:58.309 --rc genhtml_branch_coverage=1 01:20:58.309 --rc genhtml_function_coverage=1 01:20:58.309 --rc genhtml_legend=1 01:20:58.309 --rc geninfo_all_blocks=1 01:20:58.309 --rc geninfo_unexecuted_blocks=1 01:20:58.309 01:20:58.309 ' 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:20:58.309 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:20:58.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:20:58.310 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:20:58.568 Cannot find device "nvmf_init_br" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:20:58.568 Cannot find device "nvmf_init_br2" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:20:58.568 Cannot find device "nvmf_tgt_br" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:20:58.568 Cannot find device "nvmf_tgt_br2" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:20:58.568 Cannot find device "nvmf_init_br" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:20:58.568 Cannot find device "nvmf_init_br2" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:20:58.568 Cannot find device "nvmf_tgt_br" 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:20:58.568 Cannot find device "nvmf_tgt_br2" 01:20:58.568 05:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 01:20:58.568 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:20:58.568 Cannot find device "nvmf_br" 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:20:58.568 Cannot find device "nvmf_init_if" 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:20:58.568 Cannot find device "nvmf_init_if2" 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 01:20:58.568 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:20:58.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:20:58.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:20:58.569 05:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:20:58.569 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:20:58.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:20:58.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 01:20:58.827 01:20:58.827 --- 10.0.0.3 ping statistics --- 01:20:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:58.827 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:20:58.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:20:58.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 01:20:58.827 01:20:58.827 --- 10.0.0.4 ping statistics --- 01:20:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:58.827 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:20:58.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:20:58.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 01:20:58.827 01:20:58.827 --- 10.0.0.1 ping statistics --- 01:20:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:58.827 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:20:58.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:20:58.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 01:20:58.827 01:20:58.827 --- 10.0.0.2 ping statistics --- 01:20:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:20:58.827 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=71926 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 71926 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 71926 ']' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:20:58.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:20:58.827 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:58.827 [2024-12-09 05:15:50.263920] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:20:58.827 [2024-12-09 05:15:50.263982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:20:58.827 [2024-12-09 05:15:50.397792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:20:58.827 [2024-12-09 05:15:50.434466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:20:58.827 [2024-12-09 05:15:50.434509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:20:58.827 [2024-12-09 05:15:50.434515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:20:58.827 [2024-12-09 05:15:50.434520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:20:58.827 [2024-12-09 05:15:50.434525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
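Before the target app starts, nvmftestinit rebuilds the virtual NVMe/TCP topology from scratch; the "Cannot find device" and "Cannot open network namespace" messages earlier are just the idempotent pre-cleanup of interfaces that do not exist yet. The topology itself, condensed from the nvmf_veth_init trace above, is: a target namespace, two veth pairs per side bridged together, 10.0.0.1/2 on the initiator side, 10.0.0.3/4 inside the namespace, and ACCEPT rules for port 4420.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                 # bridge all host-side peers
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # confirm the initiator can reach the target namespace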
01:20:58.827 [2024-12-09 05:15:50.435192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:20:58.827 [2024-12-09 05:15:50.435347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:20:58.827 [2024-12-09 05:15:50.435248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:20:59.085 [2024-12-09 05:15:50.435327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 [2024-12-09 05:15:51.175416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 Null1 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 [2024-12-09 05:15:51.219551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 Null2 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 01:20:59.651 Null3 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.651 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 Null4 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.910 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.911 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 4420 01:20:59.911 01:20:59.911 Discovery Log Number of Records 6, Generation counter 6 01:20:59.911 =====Discovery Log Entry 0====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: current discovery subsystem 01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4420 01:20:59.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: explicit discovery connections, duplicate discovery information 01:20:59.911 sectype: none 01:20:59.911 =====Discovery Log Entry 1====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: nvme subsystem 01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4420 01:20:59.911 subnqn: nqn.2016-06.io.spdk:cnode1 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: none 01:20:59.911 sectype: none 01:20:59.911 =====Discovery Log Entry 2====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: nvme subsystem 01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4420 01:20:59.911 subnqn: nqn.2016-06.io.spdk:cnode2 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: none 01:20:59.911 sectype: none 01:20:59.911 =====Discovery Log Entry 3====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: nvme subsystem 01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4420 01:20:59.911 subnqn: nqn.2016-06.io.spdk:cnode3 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: none 01:20:59.911 sectype: none 01:20:59.911 =====Discovery Log Entry 4====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: nvme subsystem 
01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4420 01:20:59.911 subnqn: nqn.2016-06.io.spdk:cnode4 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: none 01:20:59.911 sectype: none 01:20:59.911 =====Discovery Log Entry 5====== 01:20:59.911 trtype: tcp 01:20:59.911 adrfam: ipv4 01:20:59.911 subtype: discovery subsystem referral 01:20:59.911 treq: not required 01:20:59.911 portid: 0 01:20:59.911 trsvcid: 4430 01:20:59.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:20:59.911 traddr: 10.0.0.3 01:20:59.911 eflags: none 01:20:59.911 sectype: none 01:20:59.911 Perform nvmf subsystem discovery via RPC 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.911 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.911 [ 01:20:59.911 { 01:20:59.911 "allow_any_host": true, 01:20:59.911 "hosts": [], 01:20:59.911 "listen_addresses": [ 01:20:59.911 { 01:20:59.911 "adrfam": "IPv4", 01:20:59.911 "traddr": "10.0.0.3", 01:20:59.911 "trsvcid": "4420", 01:20:59.911 "trtype": "TCP" 01:20:59.911 } 01:20:59.911 ], 01:20:59.911 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:20:59.911 "subtype": "Discovery" 01:20:59.911 }, 01:20:59.911 { 01:20:59.911 "allow_any_host": true, 01:20:59.911 "hosts": [], 01:20:59.911 "listen_addresses": [ 01:20:59.911 { 01:20:59.911 "adrfam": "IPv4", 01:20:59.911 "traddr": "10.0.0.3", 01:20:59.911 "trsvcid": "4420", 01:20:59.911 "trtype": "TCP" 01:20:59.911 } 01:20:59.911 ], 01:20:59.911 "max_cntlid": 65519, 01:20:59.911 "max_namespaces": 32, 01:20:59.911 "min_cntlid": 1, 01:20:59.911 "model_number": "SPDK bdev Controller", 01:20:59.911 "namespaces": [ 01:20:59.911 { 01:20:59.911 "bdev_name": "Null1", 01:20:59.911 "name": "Null1", 01:20:59.911 "nguid": "5B55623F511747CCB13FC25A8B34B81E", 01:20:59.911 "nsid": 1, 01:20:59.911 "uuid": "5b55623f-5117-47cc-b13f-c25a8b34b81e" 01:20:59.911 } 01:20:59.911 ], 01:20:59.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:20:59.911 "serial_number": "SPDK00000000000001", 01:20:59.911 "subtype": "NVMe" 01:20:59.911 }, 01:20:59.911 { 01:20:59.911 "allow_any_host": true, 01:20:59.911 "hosts": [], 01:20:59.911 "listen_addresses": [ 01:20:59.911 { 01:20:59.911 "adrfam": "IPv4", 01:20:59.911 "traddr": "10.0.0.3", 01:20:59.911 "trsvcid": "4420", 01:20:59.912 "trtype": "TCP" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "max_cntlid": 65519, 01:20:59.912 "max_namespaces": 32, 01:20:59.912 "min_cntlid": 1, 01:20:59.912 "model_number": "SPDK bdev Controller", 01:20:59.912 "namespaces": [ 01:20:59.912 { 01:20:59.912 "bdev_name": "Null2", 01:20:59.912 "name": "Null2", 01:20:59.912 "nguid": "5907979BF1004DFABF4CF28C50275DAE", 01:20:59.912 "nsid": 1, 01:20:59.912 "uuid": "5907979b-f100-4dfa-bf4c-f28c50275dae" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:20:59.912 "serial_number": "SPDK00000000000002", 01:20:59.912 "subtype": "NVMe" 01:20:59.912 }, 01:20:59.912 { 01:20:59.912 "allow_any_host": true, 01:20:59.912 "hosts": [], 01:20:59.912 "listen_addresses": [ 01:20:59.912 { 01:20:59.912 "adrfam": "IPv4", 01:20:59.912 "traddr": "10.0.0.3", 01:20:59.912 "trsvcid": "4420", 01:20:59.912 
"trtype": "TCP" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "max_cntlid": 65519, 01:20:59.912 "max_namespaces": 32, 01:20:59.912 "min_cntlid": 1, 01:20:59.912 "model_number": "SPDK bdev Controller", 01:20:59.912 "namespaces": [ 01:20:59.912 { 01:20:59.912 "bdev_name": "Null3", 01:20:59.912 "name": "Null3", 01:20:59.912 "nguid": "76D1122744234DC3BCEA23AF2DC31232", 01:20:59.912 "nsid": 1, 01:20:59.912 "uuid": "76d11227-4423-4dc3-bcea-23af2dc31232" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "nqn": "nqn.2016-06.io.spdk:cnode3", 01:20:59.912 "serial_number": "SPDK00000000000003", 01:20:59.912 "subtype": "NVMe" 01:20:59.912 }, 01:20:59.912 { 01:20:59.912 "allow_any_host": true, 01:20:59.912 "hosts": [], 01:20:59.912 "listen_addresses": [ 01:20:59.912 { 01:20:59.912 "adrfam": "IPv4", 01:20:59.912 "traddr": "10.0.0.3", 01:20:59.912 "trsvcid": "4420", 01:20:59.912 "trtype": "TCP" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "max_cntlid": 65519, 01:20:59.912 "max_namespaces": 32, 01:20:59.912 "min_cntlid": 1, 01:20:59.912 "model_number": "SPDK bdev Controller", 01:20:59.912 "namespaces": [ 01:20:59.912 { 01:20:59.912 "bdev_name": "Null4", 01:20:59.912 "name": "Null4", 01:20:59.912 "nguid": "4CAE19607DD44B11B9765673B480D9AF", 01:20:59.912 "nsid": 1, 01:20:59.912 "uuid": "4cae1960-7dd4-4b11-b976-5673b480d9af" 01:20:59.912 } 01:20:59.912 ], 01:20:59.912 "nqn": "nqn.2016-06.io.spdk:cnode4", 01:20:59.912 "serial_number": "SPDK00000000000004", 01:20:59.912 "subtype": "NVMe" 01:20:59.912 } 01:20:59.912 ] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:20:59.912 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.913 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 01:20:59.913 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.913 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:00.172 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:00.172 rmmod nvme_tcp 01:21:00.172 rmmod nvme_fabrics 01:21:00.172 rmmod nvme_keyring 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 71926 ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 71926 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 71926 ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 71926 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71926 01:21:00.172 killing process with pid 71926 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71926' 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 71926 01:21:00.172 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 71926 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:00.430 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:00.430 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:00.430 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:00.430 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:00.430 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:00.430 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 01:21:00.689 01:21:00.689 real 0m2.319s 01:21:00.689 user 0m5.776s 01:21:00.689 sys 0m0.557s 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 01:21:00.689 ************************************ 01:21:00.689 END TEST nvmf_target_discovery 01:21:00.689 ************************************ 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:00.689 ************************************ 01:21:00.689 START TEST nvmf_referrals 01:21:00.689 ************************************ 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 01:21:00.689 * Looking for test storage... 01:21:00.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 01:21:00.689 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:00.690 --rc genhtml_branch_coverage=1 01:21:00.690 --rc genhtml_function_coverage=1 01:21:00.690 --rc genhtml_legend=1 01:21:00.690 --rc geninfo_all_blocks=1 01:21:00.690 --rc geninfo_unexecuted_blocks=1 01:21:00.690 01:21:00.690 ' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:00.690 --rc genhtml_branch_coverage=1 01:21:00.690 --rc genhtml_function_coverage=1 01:21:00.690 --rc genhtml_legend=1 01:21:00.690 --rc geninfo_all_blocks=1 01:21:00.690 --rc geninfo_unexecuted_blocks=1 01:21:00.690 01:21:00.690 ' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:00.690 --rc genhtml_branch_coverage=1 01:21:00.690 --rc genhtml_function_coverage=1 01:21:00.690 --rc genhtml_legend=1 01:21:00.690 --rc geninfo_all_blocks=1 01:21:00.690 --rc geninfo_unexecuted_blocks=1 01:21:00.690 01:21:00.690 ' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:00.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:00.690 --rc genhtml_branch_coverage=1 01:21:00.690 --rc genhtml_function_coverage=1 01:21:00.690 --rc genhtml_legend=1 01:21:00.690 --rc geninfo_all_blocks=1 01:21:00.690 --rc geninfo_unexecuted_blocks=1 01:21:00.690 01:21:00.690 ' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
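The nvmf_target_discovery run above drives the target entirely through rpc_cmd, the test helper around scripts/rpc.py: for each of four cnodes it creates a null bdev, a subsystem with a fixed serial number, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.3:4420, then exposes the discovery subsystem plus one referral on port 4430 before running nvme discover. A minimal stand-alone sketch of that same sequence, assuming an nvmf_tgt is already running on its default RPC socket; the rpc.py path and the trimmed nvme discover flags are illustrative assumptions, not part of the test script:

# Sketch only: same RPC calls as the discovery test above, issued via scripts/rpc.py.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 1 2 3 4; do
    "$rpc" bdev_null_create "Null$i" 102400 512                                     # null bdev with the size/block size used by the test
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"             # expose the bdev as the subsystem's namespace
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420             # discovery subsystem listener
"$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430                       # appears as Discovery Log Entry 5
nvme discover -t tcp -a 10.0.0.3 -s 4420                                            # expect 6 records: discovery, cnode1-4, referral

The six-record Discovery Log printed by nvme discover and the nvmf_get_subsystems JSON are the two views the test then compares, before tearing everything back down with nvmf_delete_subsystem, bdev_null_delete and nvmf_discovery_remove_referral as seen in the log.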
01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 01:21:00.690 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:00.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 01:21:00.691 05:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:00.691 Cannot find device "nvmf_init_br" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:00.691 Cannot find device "nvmf_init_br2" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:00.691 Cannot find device "nvmf_tgt_br" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:00.691 Cannot find device "nvmf_tgt_br2" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:00.691 Cannot find device "nvmf_init_br" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:00.691 Cannot find device "nvmf_init_br2" 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 01:21:00.691 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:00.950 Cannot find device "nvmf_tgt_br" 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:00.950 Cannot find device "nvmf_tgt_br2" 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:00.950 Cannot find device "nvmf_br" 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:00.950 Cannot find device "nvmf_init_if" 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:00.950 Cannot find device "nvmf_init_if2" 01:21:00.950 05:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:00.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:00.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:00.950 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:00.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:00.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 01:21:00.951 01:21:00.951 --- 10.0.0.3 ping statistics --- 01:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:00.951 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:00.951 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:00.951 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:21:00.951 01:21:00.951 --- 10.0.0.4 ping statistics --- 01:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:00.951 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:00.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:00.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 01:21:00.951 01:21:00.951 --- 10.0.0.1 ping statistics --- 01:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:00.951 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:00.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:21:00.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:21:00.951 01:21:00.951 --- 10.0.0.2 ping statistics --- 01:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:00.951 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72197 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72197 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72197 ']' 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:00.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:00.951 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:01.209 [2024-12-09 05:15:52.586143] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
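Before the referrals test can start its own target, nvmf_veth_init (the ip/iptables commands logged above) builds a small virtual topology: a nvmf_tgt_ns_spdk network namespace, veth pairs whose target-side ends (10.0.0.3, 10.0.0.4) live inside that namespace, a host bridge nvmf_br joining the host-side peers, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the first initiator/target pair only, assuming root on a host where none of these devices exist yet (the second veth pair and the forwarding rule are omitted):

# Condensed from the nvmf_veth_init steps in the log; interface names and addresses as logged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator end + its bridge-side peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target end + its bridge-side peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                            # bridge the two host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                                  # host -> namespaced target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host initiator address

With the namespace in place, nvmfappstart launches nvmf_tgt inside it (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which produces the "Starting SPDK" banner above.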
01:21:01.209 [2024-12-09 05:15:52.586191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:01.209 [2024-12-09 05:15:52.725495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:21:01.209 [2024-12-09 05:15:52.760862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:01.209 [2024-12-09 05:15:52.760897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:01.209 [2024-12-09 05:15:52.760904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:01.209 [2024-12-09 05:15:52.760909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:01.209 [2024-12-09 05:15:52.760913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:01.209 [2024-12-09 05:15:52.761607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:01.209 [2024-12-09 05:15:52.761673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:01.209 [2024-12-09 05:15:52.761772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:01.209 [2024-12-09 05:15:52.761775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 [2024-12-09 05:15:53.457985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 [2024-12-09 05:15:53.470078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.142 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.143 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.400 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:21:02.401 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 01:21:02.658 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 01:21:02.659 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.659 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 01:21:02.917 05:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 01:21:02.917 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 
--hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -a 10.0.0.3 -s 8009 -o json 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 01:21:03.175 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:03.433 
05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:03.433 rmmod nvme_tcp 01:21:03.433 rmmod nvme_fabrics 01:21:03.433 rmmod nvme_keyring 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72197 ']' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72197 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72197 ']' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72197 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72197 01:21:03.433 killing process with pid 72197 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72197' 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72197 01:21:03.433 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72197 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:03.689 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 01:21:03.946 01:21:03.946 real 0m3.225s 01:21:03.946 user 0m9.892s 01:21:03.946 sys 0m0.746s 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:03.946 ************************************ 01:21:03.946 END TEST nvmf_referrals 01:21:03.946 ************************************ 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:03.946 ************************************ 01:21:03.946 START TEST nvmf_connect_disconnect 01:21:03.946 ************************************ 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 01:21:03.946 * Looking for test storage... 
01:21:03.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:03.946 --rc genhtml_branch_coverage=1 01:21:03.946 --rc genhtml_function_coverage=1 01:21:03.946 --rc genhtml_legend=1 01:21:03.946 --rc geninfo_all_blocks=1 01:21:03.946 --rc geninfo_unexecuted_blocks=1 01:21:03.946 01:21:03.946 ' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:03.946 --rc genhtml_branch_coverage=1 01:21:03.946 --rc genhtml_function_coverage=1 01:21:03.946 --rc genhtml_legend=1 01:21:03.946 --rc geninfo_all_blocks=1 01:21:03.946 --rc geninfo_unexecuted_blocks=1 01:21:03.946 01:21:03.946 ' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:03.946 --rc genhtml_branch_coverage=1 01:21:03.946 --rc genhtml_function_coverage=1 01:21:03.946 --rc genhtml_legend=1 01:21:03.946 --rc geninfo_all_blocks=1 01:21:03.946 --rc geninfo_unexecuted_blocks=1 01:21:03.946 01:21:03.946 ' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:03.946 --rc genhtml_branch_coverage=1 01:21:03.946 --rc genhtml_function_coverage=1 01:21:03.946 --rc genhtml_legend=1 01:21:03.946 --rc geninfo_all_blocks=1 01:21:03.946 --rc geninfo_unexecuted_blocks=1 01:21:03.946 01:21:03.946 ' 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:03.946 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:03.947 05:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:03.947 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:03.947 Cannot find device "nvmf_init_br" 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:03.947 Cannot find device "nvmf_init_br2" 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:03.947 Cannot find device "nvmf_tgt_br" 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:03.947 Cannot find device "nvmf_tgt_br2" 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 01:21:03.947 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:04.205 Cannot find device "nvmf_init_br" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:04.205 Cannot find device "nvmf_init_br2" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:04.205 Cannot find device "nvmf_tgt_br" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:04.205 Cannot find device "nvmf_tgt_br2" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:04.205 Cannot find device "nvmf_br" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:04.205 Cannot find device "nvmf_init_if" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:04.205 Cannot find device "nvmf_init_if2" 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:04.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:04.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:04.205 05:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:04.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:21:04.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 01:21:04.205 01:21:04.205 --- 10.0.0.3 ping statistics --- 01:21:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:04.205 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:04.205 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:04.205 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 01:21:04.205 01:21:04.205 --- 10.0.0.4 ping statistics --- 01:21:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:04.205 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:04.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:04.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 01:21:04.205 01:21:04.205 --- 10.0.0.1 ping statistics --- 01:21:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:04.205 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:04.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:21:04.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 01:21:04.205 01:21:04.205 --- 10.0.0.2 ping statistics --- 01:21:04.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:04.205 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:21:04.205 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:04.206 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72551 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72551 01:21:04.463 05:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72551 ']' 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:04.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:04.463 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:04.463 [2024-12-09 05:15:55.860477] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:04.463 [2024-12-09 05:15:55.860535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:04.463 [2024-12-09 05:15:56.002627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:21:04.463 [2024-12-09 05:15:56.038303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:04.463 [2024-12-09 05:15:56.038342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:04.463 [2024-12-09 05:15:56.038349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:04.463 [2024-12-09 05:15:56.038354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:04.463 [2024-12-09 05:15:56.038359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:21:04.463 [2024-12-09 05:15:56.039031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:04.463 [2024-12-09 05:15:56.039093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:04.463 [2024-12-09 05:15:56.039173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:04.463 [2024-12-09 05:15:56.039161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 [2024-12-09 05:15:56.779126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 05:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:05.398 [2024-12-09 05:15:56.845354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 01:21:05.398 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 01:21:07.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:09.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:12.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:14.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:16.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:16.778 rmmod nvme_tcp 01:21:16.778 rmmod nvme_fabrics 01:21:16.778 rmmod nvme_keyring 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72551 ']' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72551 ']' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:16.778 killing process with pid 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72551' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72551 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:16.778 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:17.036 05:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 01:21:17.036 01:21:17.036 real 0m13.163s 01:21:17.036 user 0m48.417s 01:21:17.036 sys 0m1.339s 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 01:21:17.036 ************************************ 01:21:17.036 END TEST nvmf_connect_disconnect 01:21:17.036 ************************************ 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:17.036 ************************************ 01:21:17.036 START TEST nvmf_multitarget 01:21:17.036 ************************************ 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 01:21:17.036 * Looking for test storage... 
01:21:17.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 01:21:17.036 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:17.294 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:17.294 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:17.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:17.295 --rc genhtml_branch_coverage=1 01:21:17.295 --rc genhtml_function_coverage=1 01:21:17.295 --rc genhtml_legend=1 01:21:17.295 --rc geninfo_all_blocks=1 01:21:17.295 --rc geninfo_unexecuted_blocks=1 01:21:17.295 01:21:17.295 ' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:17.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:17.295 --rc genhtml_branch_coverage=1 01:21:17.295 --rc genhtml_function_coverage=1 01:21:17.295 --rc genhtml_legend=1 01:21:17.295 --rc geninfo_all_blocks=1 01:21:17.295 --rc geninfo_unexecuted_blocks=1 01:21:17.295 01:21:17.295 ' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:17.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:17.295 --rc genhtml_branch_coverage=1 01:21:17.295 --rc genhtml_function_coverage=1 01:21:17.295 --rc genhtml_legend=1 01:21:17.295 --rc geninfo_all_blocks=1 01:21:17.295 --rc geninfo_unexecuted_blocks=1 01:21:17.295 01:21:17.295 ' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:17.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:17.295 --rc genhtml_branch_coverage=1 01:21:17.295 --rc genhtml_function_coverage=1 01:21:17.295 --rc genhtml_legend=1 01:21:17.295 --rc geninfo_all_blocks=1 01:21:17.295 --rc geninfo_unexecuted_blocks=1 01:21:17.295 01:21:17.295 ' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:17.295 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:17.295 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:17.296 Cannot find device "nvmf_init_br" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:17.296 Cannot find device "nvmf_init_br2" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:17.296 Cannot find device "nvmf_tgt_br" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:17.296 Cannot find device "nvmf_tgt_br2" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:17.296 Cannot find device "nvmf_init_br" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:17.296 Cannot find device "nvmf_init_br2" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:17.296 Cannot find device "nvmf_tgt_br" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:17.296 Cannot find device "nvmf_tgt_br2" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:17.296 Cannot find device "nvmf_br" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:17.296 Cannot find device "nvmf_init_if" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:17.296 Cannot find device "nvmf_init_if2" 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:17.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:17.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:17.296 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:17.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:17.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:21:17.554 01:21:17.554 --- 10.0.0.3 ping statistics --- 01:21:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:17.554 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:17.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:17.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.023 ms 01:21:17.554 01:21:17.554 --- 10.0.0.4 ping statistics --- 01:21:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:17.554 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:17.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:17.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 01:21:17.554 01:21:17.554 --- 10.0.0.1 ping statistics --- 01:21:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:17.554 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:17.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:21:17.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 01:21:17.554 01:21:17.554 --- 10.0.0.2 ping statistics --- 01:21:17.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:17.554 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:17.554 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73005 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73005 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73005 ']' 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:17.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:21:17.554 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:21:17.554 [2024-12-09 05:16:09.056129] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:17.554 [2024-12-09 05:16:09.056192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:17.812 [2024-12-09 05:16:09.198176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:21:17.812 [2024-12-09 05:16:09.233257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:17.812 [2024-12-09 05:16:09.233297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:17.812 [2024-12-09 05:16:09.233304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:17.812 [2024-12-09 05:16:09.233309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:17.812 [2024-12-09 05:16:09.233314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:17.812 [2024-12-09 05:16:09.234005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:17.812 [2024-12-09 05:16:09.234096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:17.812 [2024-12-09 05:16:09.234583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:17.812 [2024-12-09 05:16:09.234588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 01:21:18.376 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 01:21:18.633 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 01:21:18.633 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 01:21:18.633 "nvmf_tgt_1" 01:21:18.633 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 01:21:18.633 "nvmf_tgt_2" 01:21:18.890 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 01:21:18.890 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 01:21:18.890 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 01:21:18.890 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 01:21:18.890 true 01:21:18.890 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 01:21:19.147 true 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:19.147 rmmod nvme_tcp 01:21:19.147 rmmod nvme_fabrics 01:21:19.147 rmmod nvme_keyring 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73005 ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73005 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73005 ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73005 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73005 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73005' 01:21:19.147 killing process with pid 
73005 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73005 01:21:19.147 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73005 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:19.404 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:19.404 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 01:21:19.664 01:21:19.664 
real 0m2.571s 01:21:19.664 user 0m7.509s 01:21:19.664 sys 0m0.544s 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:19.664 ************************************ 01:21:19.664 END TEST nvmf_multitarget 01:21:19.664 ************************************ 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:19.664 ************************************ 01:21:19.664 START TEST nvmf_rpc 01:21:19.664 ************************************ 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 01:21:19.664 * Looking for test storage... 01:21:19.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:21:19.664 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:19.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:19.924 --rc genhtml_branch_coverage=1 01:21:19.924 --rc genhtml_function_coverage=1 01:21:19.924 --rc genhtml_legend=1 01:21:19.924 --rc geninfo_all_blocks=1 01:21:19.924 --rc geninfo_unexecuted_blocks=1 01:21:19.924 01:21:19.924 ' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:19.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:19.924 --rc genhtml_branch_coverage=1 01:21:19.924 --rc genhtml_function_coverage=1 01:21:19.924 --rc genhtml_legend=1 01:21:19.924 --rc geninfo_all_blocks=1 01:21:19.924 --rc geninfo_unexecuted_blocks=1 01:21:19.924 01:21:19.924 ' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:19.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:19.924 --rc genhtml_branch_coverage=1 01:21:19.924 --rc genhtml_function_coverage=1 01:21:19.924 --rc genhtml_legend=1 01:21:19.924 --rc geninfo_all_blocks=1 01:21:19.924 --rc geninfo_unexecuted_blocks=1 01:21:19.924 01:21:19.924 ' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:19.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:19.924 --rc genhtml_branch_coverage=1 01:21:19.924 --rc genhtml_function_coverage=1 01:21:19.924 --rc genhtml_legend=1 01:21:19.924 --rc geninfo_all_blocks=1 01:21:19.924 --rc geninfo_unexecuted_blocks=1 01:21:19.924 01:21:19.924 ' 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:19.924 05:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:19.924 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:19.925 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:19.925 Cannot find device "nvmf_init_br" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 01:21:19.925 05:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:19.925 Cannot find device "nvmf_init_br2" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:19.925 Cannot find device "nvmf_tgt_br" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:19.925 Cannot find device "nvmf_tgt_br2" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:19.925 Cannot find device "nvmf_init_br" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:19.925 Cannot find device "nvmf_init_br2" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:19.925 Cannot find device "nvmf_tgt_br" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:19.925 Cannot find device "nvmf_tgt_br2" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:19.925 Cannot find device "nvmf_br" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:19.925 Cannot find device "nvmf_init_if" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:19.925 Cannot find device "nvmf_init_if2" 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:19.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:19.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 01:21:19.925 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:19.926 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:20.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:20.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 01:21:20.183 01:21:20.183 --- 10.0.0.3 ping statistics --- 01:21:20.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:20.183 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:20.183 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:20.183 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 01:21:20.183 01:21:20.183 --- 10.0.0.4 ping statistics --- 01:21:20.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:20.183 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:20.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:20.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:21:20.183 01:21:20.183 --- 10.0.0.1 ping statistics --- 01:21:20.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:20.183 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:20.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:21:20.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 01:21:20.183 01:21:20.183 --- 10.0.0.2 ping statistics --- 01:21:20.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:20.183 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73281 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73281 01:21:20.183 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73281 ']' 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:20.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:20.184 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:20.184 [2024-12-09 05:16:11.678707] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:20.184 [2024-12-09 05:16:11.678768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:20.442 [2024-12-09 05:16:11.810283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:21:20.442 [2024-12-09 05:16:11.840302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:20.442 [2024-12-09 05:16:11.840334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:20.442 [2024-12-09 05:16:11.840339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:20.442 [2024-12-09 05:16:11.840343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:20.442 [2024-12-09 05:16:11.840347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:20.442 [2024-12-09 05:16:11.840938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:20.442 [2024-12-09 05:16:11.841111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:20.442 [2024-12-09 05:16:11.841523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:20.442 [2024-12-09 05:16:11.841594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 01:21:21.013 "poll_groups": [ 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_000", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_001", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 
01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_002", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_003", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [] 01:21:21.013 } 01:21:21.013 ], 01:21:21.013 "tick_rate": 2600000000 01:21:21.013 }' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.013 [2024-12-09 05:16:12.560266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 01:21:21.013 "poll_groups": [ 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_000", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [ 01:21:21.013 { 01:21:21.013 "trtype": "TCP" 01:21:21.013 } 01:21:21.013 ] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_001", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [ 01:21:21.013 { 01:21:21.013 "trtype": "TCP" 01:21:21.013 } 01:21:21.013 ] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_002", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [ 01:21:21.013 { 01:21:21.013 "trtype": "TCP" 01:21:21.013 } 
01:21:21.013 ] 01:21:21.013 }, 01:21:21.013 { 01:21:21.013 "admin_qpairs": 0, 01:21:21.013 "completed_nvme_io": 0, 01:21:21.013 "current_admin_qpairs": 0, 01:21:21.013 "current_io_qpairs": 0, 01:21:21.013 "io_qpairs": 0, 01:21:21.013 "name": "nvmf_tgt_poll_group_003", 01:21:21.013 "pending_bdev_io": 0, 01:21:21.013 "transports": [ 01:21:21.013 { 01:21:21.013 "trtype": "TCP" 01:21:21.013 } 01:21:21.013 ] 01:21:21.013 } 01:21:21.013 ], 01:21:21.013 "tick_rate": 2600000000 01:21:21.013 }' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 01:21:21.013 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.271 Malloc1 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.271 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 01:21:21.272 05:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.272 [2024-12-09 05:16:12.710528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.3 -s 4420 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.3 -s 4420 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.3 -s 4420 01:21:21.272 [2024-12-09 05:16:12.732800] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784' 01:21:21.272 Failed to write to /dev/nvme-fabrics: Input/output error 01:21:21.272 could not add new controller: failed to write to nvme-fabrics device 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
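(The rejected connect above is the expected negative case: the host NQN is not yet on cnode1's allow list, so the fabrics write fails with "does not allow host". The trace that follows adds the host with nvmf_subsystem_add_host and repeats the connect successfully. A condensed sketch of that allow-list flow, with NQNs and addresses copied from the trace; invoking scripts/rpc.py directly is an assumption — the harness reaches it through its rpc_cmd wrapper.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784

  # host not allow-listed yet: the target rejects the connect (the I/O error seen above)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$HOSTNQN" --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784

  # allow the host on the subsystem, then the same connect succeeds
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$HOSTNQN" --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784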
01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.272 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:21.529 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 01:21:21.529 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:21.529 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:21.529 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:21.529 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:23.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:23.427 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:23.427 [2024-12-09 05:16:15.033688] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784' 01:21:23.427 Failed to write to /dev/nvme-fabrics: Input/output error 01:21:23.427 could not add new controller: failed to write to nvme-fabrics device 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:23.427 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 01:21:23.683 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:23.684 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:26.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:26.213 [2024-12-09 05:16:17.315873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:26.213 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:28.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 [2024-12-09 05:16:19.593941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 05:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.117 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:28.376 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:21:28.377 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:28.377 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:28.377 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:28.377 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:30.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.275 05:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.275 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.276 [2024-12-09 05:16:21.876375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.276 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:30.535 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.535 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:30.535 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:21:30.535 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:30.535 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:30.535 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:30.535 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 01:21:32.449 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:32.449 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:32.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:32.707 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:32.708 05:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 [2024-12-09 05:16:24.162781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.708 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:32.966 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:21:32.966 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:32.966 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:32.966 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:32.966 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:34.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:34.866 05:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 [2024-12-09 05:16:26.445347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 01:21:34.866 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:21:35.124 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 01:21:35.124 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 01:21:35.124 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:21:35.124 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:21:35.124 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 01:21:37.024 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:21:37.024 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:21:37.281 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:21:37.281 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:21:37.281 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:21:37.281 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:21:37.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
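The target/rpc.sh@81-94 loop traced above repeatedly builds a subsystem, attaches the TCP listener and a namespace, connects from the host, waits for the serial to appear, then tears everything back down. For reference, one iteration reproduced by hand with SPDK's scripts/rpc.py and nvme-cli looks roughly like the sketch below; the NQN, serial, address, port and Malloc1 bdev are taken from the trace, while the retry loop is only a simplification of the waitforserial helper, not the helper itself.

    # Sketch of one create/connect/disconnect cycle from the trace above.
    # rpc_cmd in the test wraps scripts/rpc.py against the running nvmf_tgt.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: subsystem, TCP listener, namespace 5 backed by Malloc1.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # Host side: connect, wait until the block device with the expected
    # serial shows up, then disconnect (simplified stand-in for waitforserial).
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
        --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # Target side clean-up for this iteration.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1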
01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 [2024-12-09 05:16:28.739652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 [2024-12-09 05:16:28.791683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:37.282 05:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 [2024-12-09 05:16:28.839710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.282 [2024-12-09 05:16:28.887732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.282 
05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.282 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 [2024-12-09 05:16:28.935777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 01:21:37.540 "poll_groups": [ 01:21:37.540 { 01:21:37.540 "admin_qpairs": 2, 01:21:37.540 "completed_nvme_io": 66, 01:21:37.540 "current_admin_qpairs": 0, 01:21:37.540 "current_io_qpairs": 0, 01:21:37.540 "io_qpairs": 16, 01:21:37.540 "name": "nvmf_tgt_poll_group_000", 01:21:37.540 "pending_bdev_io": 0, 01:21:37.540 "transports": [ 01:21:37.540 { 01:21:37.540 "trtype": "TCP" 01:21:37.540 } 01:21:37.540 ] 01:21:37.540 }, 01:21:37.540 { 01:21:37.540 "admin_qpairs": 3, 01:21:37.540 "completed_nvme_io": 117, 01:21:37.540 "current_admin_qpairs": 0, 01:21:37.540 "current_io_qpairs": 0, 01:21:37.540 "io_qpairs": 17, 01:21:37.540 "name": "nvmf_tgt_poll_group_001", 01:21:37.540 "pending_bdev_io": 0, 01:21:37.540 "transports": [ 01:21:37.540 { 01:21:37.540 "trtype": "TCP" 01:21:37.540 } 01:21:37.540 ] 01:21:37.540 }, 01:21:37.540 { 01:21:37.540 "admin_qpairs": 1, 01:21:37.540 "completed_nvme_io": 169, 01:21:37.540 "current_admin_qpairs": 0, 01:21:37.540 "current_io_qpairs": 0, 01:21:37.540 "io_qpairs": 19, 01:21:37.540 "name": "nvmf_tgt_poll_group_002", 01:21:37.540 "pending_bdev_io": 0, 01:21:37.540 "transports": [ 01:21:37.540 { 01:21:37.540 "trtype": "TCP" 01:21:37.540 } 01:21:37.540 ] 01:21:37.540 }, 01:21:37.540 { 01:21:37.540 "admin_qpairs": 1, 01:21:37.540 "completed_nvme_io": 68, 01:21:37.540 "current_admin_qpairs": 0, 01:21:37.540 "current_io_qpairs": 0, 01:21:37.540 "io_qpairs": 18, 01:21:37.540 "name": "nvmf_tgt_poll_group_003", 01:21:37.540 "pending_bdev_io": 0, 01:21:37.540 "transports": [ 01:21:37.540 { 01:21:37.540 "trtype": "TCP" 01:21:37.540 } 01:21:37.540 ] 01:21:37.540 } 01:21:37.540 ], 
01:21:37.540 "tick_rate": 2600000000 01:21:37.540 }' 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 01:21:37.540 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:37.540 rmmod nvme_tcp 01:21:37.540 rmmod nvme_fabrics 01:21:37.540 rmmod nvme_keyring 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73281 ']' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73281 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73281 ']' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73281 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:37.540 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73281 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:37.798 killing process with pid 73281 01:21:37.798 05:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73281' 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73281 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73281 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:37.798 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 01:21:38.055 01:21:38.055 real 0m18.376s 01:21:38.055 user 1m9.132s 01:21:38.055 sys 0m1.802s 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:38.055 ************************************ 01:21:38.055 END TEST nvmf_rpc 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 01:21:38.055 ************************************ 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:38.055 ************************************ 01:21:38.055 START TEST nvmf_invalid 01:21:38.055 ************************************ 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 01:21:38.055 * Looking for test storage... 01:21:38.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:38.055 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:38.314 --rc genhtml_branch_coverage=1 01:21:38.314 --rc genhtml_function_coverage=1 01:21:38.314 --rc genhtml_legend=1 01:21:38.314 --rc geninfo_all_blocks=1 01:21:38.314 --rc geninfo_unexecuted_blocks=1 01:21:38.314 01:21:38.314 ' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:38.314 --rc genhtml_branch_coverage=1 01:21:38.314 --rc genhtml_function_coverage=1 01:21:38.314 --rc genhtml_legend=1 01:21:38.314 --rc geninfo_all_blocks=1 01:21:38.314 --rc geninfo_unexecuted_blocks=1 01:21:38.314 01:21:38.314 ' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:38.314 --rc genhtml_branch_coverage=1 01:21:38.314 --rc genhtml_function_coverage=1 01:21:38.314 --rc genhtml_legend=1 01:21:38.314 --rc geninfo_all_blocks=1 01:21:38.314 --rc geninfo_unexecuted_blocks=1 01:21:38.314 01:21:38.314 ' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:38.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:38.314 --rc genhtml_branch_coverage=1 01:21:38.314 --rc genhtml_function_coverage=1 01:21:38.314 --rc genhtml_legend=1 01:21:38.314 --rc geninfo_all_blocks=1 01:21:38.314 --rc geninfo_unexecuted_blocks=1 01:21:38.314 01:21:38.314 ' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 01:21:38.314 05:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:38.314 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:38.315 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
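The entries that follow first try to delete any leftover interfaces from a previous run (the "Cannot find device" / "Cannot open network namespace" messages are expected on a clean host) and then rebuild the test network: the initiator-side veth ends stay in the root namespace, the target-side ends are moved into nvmf_tgt_ns_spdk, and the peer ends are enslaved to the nvmf_br bridge before the connectivity pings. A condensed sketch of the equivalent commands, showing only the first initiator/target pair (nvmf_veth_init repeats the same steps for nvmf_init_if2 and nvmf_tgt_if2), not the verbatim nvmf/common.sh source:

# namespace and veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + bridge end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace

# addressing (10.0.0.0/24, initiator .1, target .3)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring everything up and bridge the two sides together
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# allow NVMe/TCP traffic in and sanity-check connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                            # root namespace -> target namespace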
01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:38.315 Cannot find device "nvmf_init_br" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:38.315 Cannot find device "nvmf_init_br2" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:38.315 Cannot find device "nvmf_tgt_br" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:38.315 Cannot find device "nvmf_tgt_br2" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:38.315 Cannot find device "nvmf_init_br" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:38.315 Cannot find device "nvmf_init_br2" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:38.315 Cannot find device "nvmf_tgt_br" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:38.315 Cannot find device "nvmf_tgt_br2" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:38.315 Cannot find device "nvmf_br" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:38.315 Cannot find device "nvmf_init_if" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:38.315 Cannot find device "nvmf_init_if2" 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:38.315 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:38.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:38.315 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:38.593 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:38.593 05:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:38.593 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:38.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:38.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 01:21:38.593 01:21:38.593 --- 10.0.0.3 ping statistics --- 01:21:38.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:38.594 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:38.594 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:38.594 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:21:38.594 01:21:38.594 --- 10.0.0.4 ping statistics --- 01:21:38.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:38.594 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:38.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:38.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:21:38.594 01:21:38.594 --- 10.0.0.1 ping statistics --- 01:21:38.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:38.594 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:38.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:21:38.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 01:21:38.594 01:21:38.594 --- 10.0.0.2 ping statistics --- 01:21:38.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:38.594 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=73842 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 73842 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 73842 ']' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:38.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:38.594 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:21:38.594 [2024-12-09 05:16:30.103263] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:38.594 [2024-12-09 05:16:30.103322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:38.926 [2024-12-09 05:16:30.245349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:21:38.926 [2024-12-09 05:16:30.281292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:38.926 [2024-12-09 05:16:30.281332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:38.926 [2024-12-09 05:16:30.281339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:38.926 [2024-12-09 05:16:30.281344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:38.926 [2024-12-09 05:16:30.281348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:38.926 [2024-12-09 05:16:30.282056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:38.926 [2024-12-09 05:16:30.282541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:38.926 [2024-12-09 05:16:30.282920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:38.926 [2024-12-09 05:16:30.283009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:21:39.490 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:39.490 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 01:21:39.490 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:39.490 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:39.490 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:21:39.490 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:39.490 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 01:21:39.490 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21196 01:21:39.746 [2024-12-09 05:16:31.198883] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 01:21:39.746 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21196 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 01:21:39.746 request: 01:21:39.746 { 01:21:39.746 "method": "nvmf_create_subsystem", 01:21:39.746 "params": { 01:21:39.746 "nqn": "nqn.2016-06.io.spdk:cnode21196", 01:21:39.746 "tgt_name": "foobar" 01:21:39.746 } 01:21:39.746 } 01:21:39.746 Got JSON-RPC error response 01:21:39.746 GoRPCClient: error on JSON-RPC call' 01:21:39.746 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode21196 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 01:21:39.746 request: 01:21:39.746 { 01:21:39.746 "method": "nvmf_create_subsystem", 01:21:39.746 "params": { 01:21:39.746 "nqn": "nqn.2016-06.io.spdk:cnode21196", 01:21:39.746 "tgt_name": "foobar" 01:21:39.746 } 01:21:39.746 } 01:21:39.746 Got JSON-RPC error response 01:21:39.746 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 01:21:39.746 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 01:21:39.746 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31808 01:21:40.004 [2024-12-09 05:16:31.411055] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31808: invalid serial number 'SPDKISFASTANDAWESOME' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31808 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 01:21:40.004 request: 01:21:40.004 { 01:21:40.004 "method": "nvmf_create_subsystem", 01:21:40.004 "params": { 01:21:40.004 "nqn": "nqn.2016-06.io.spdk:cnode31808", 01:21:40.004 "serial_number": "SPDKISFASTANDAWESOME\u001f" 01:21:40.004 } 01:21:40.004 } 01:21:40.004 Got JSON-RPC error response 01:21:40.004 GoRPCClient: error on JSON-RPC call' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31808 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 01:21:40.004 request: 01:21:40.004 { 01:21:40.004 "method": "nvmf_create_subsystem", 01:21:40.004 "params": { 01:21:40.004 "nqn": "nqn.2016-06.io.spdk:cnode31808", 01:21:40.004 "serial_number": "SPDKISFASTANDAWESOME\u001f" 01:21:40.004 } 01:21:40.004 } 01:21:40.004 Got JSON-RPC error response 01:21:40.004 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10982 01:21:40.004 [2024-12-09 05:16:31.575178] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10982: invalid model number 'SPDK_Controller' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10982], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 01:21:40.004 request: 01:21:40.004 { 01:21:40.004 "method": "nvmf_create_subsystem", 01:21:40.004 "params": { 01:21:40.004 "nqn": "nqn.2016-06.io.spdk:cnode10982", 01:21:40.004 "model_number": "SPDK_Controller\u001f" 
01:21:40.004 } 01:21:40.004 } 01:21:40.004 Got JSON-RPC error response 01:21:40.004 GoRPCClient: error on JSON-RPC call' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10982], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 01:21:40.004 request: 01:21:40.004 { 01:21:40.004 "method": "nvmf_create_subsystem", 01:21:40.004 "params": { 01:21:40.004 "nqn": "nqn.2016-06.io.spdk:cnode10982", 01:21:40.004 "model_number": "SPDK_Controller\u001f" 01:21:40.004 } 01:21:40.004 } 01:21:40.004 Got JSON-RPC error response 01:21:40.004 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 01:21:40.004 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 01:21:40.261 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 01:21:40.261 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
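The trace around this point is target/invalid.sh's gen_random_s helper expanding one character per iteration: it draws ASCII codes between 32 and 127 (RANDOM was seeded to 0 earlier, so the run is reproducible), converts each code to a byte with printf %x and echo -e, and appends it to the string that becomes the 21-character serial number tested a few entries below. A rough sketch of the logic being traced, not the verbatim SPDK source:

gen_random_s() {
    local length=$1 ll
    local chars=($(seq 32 127))   # printable ASCII plus DEL (0x7f)
    local string=
    for ((ll = 0; ll < length; ll++)); do
        # pick a code, turn it into the corresponding byte, append it
        string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

In this run the assembled value (including an embedded DEL character) is the serial number that nvmf_create_subsystem later rejects with "Invalid SN".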
01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x71' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9)c]$K![YMCp1q|SD!C3' 01:21:40.262 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '9)c]$K![YMCp1q|SD!C3' nqn.2016-06.io.spdk:cnode18998 01:21:40.262 [2024-12-09 05:16:31.859421] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18998: invalid serial number '9)c]$K![YMCp1q|SD!C3' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18998 serial_number:9)c]$K![YMCp1q|SD!C3], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9)c]$K![YMCp1q|SD!C3 01:21:40.521 request: 01:21:40.521 { 01:21:40.521 "method": "nvmf_create_subsystem", 01:21:40.521 "params": { 01:21:40.521 "nqn": "nqn.2016-06.io.spdk:cnode18998", 01:21:40.521 "serial_number": "9)\u007fc]$K![YMCp1q|SD!C3" 01:21:40.521 } 01:21:40.521 } 01:21:40.521 Got JSON-RPC error response 01:21:40.521 GoRPCClient: error on JSON-RPC call' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/09 05:16:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18998 serial_number:9)c]$K![YMCp1q|SD!C3], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 9)c]$K![YMCp1q|SD!C3 01:21:40.521 request: 01:21:40.521 { 01:21:40.521 "method": "nvmf_create_subsystem", 01:21:40.521 "params": { 01:21:40.521 "nqn": "nqn.2016-06.io.spdk:cnode18998", 01:21:40.521 "serial_number": "9)\u007fc]$K![YMCp1q|SD!C3" 01:21:40.521 } 01:21:40.521 } 01:21:40.521 Got JSON-RPC error response 01:21:40.521 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 01:21:40.521 05:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 01:21:40.521 05:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.521 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 01:21:40.522 05:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 01:21:40.522 
05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 01:21:40.522 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'\''gmJ0>ej>' 01:21:40.523 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'\''gmJ0>ej>' nqn.2016-06.io.spdk:cnode22025 01:21:40.780 [2024-12-09 05:16:32.199719] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22025: invalid model number 'mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'gmJ0>ej>' 01:21:40.780 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'\''gmJ0>ej> nqn:nqn.2016-06.io.spdk:cnode22025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'\''gmJ0>ej> 01:21:40.780 request: 01:21:40.780 { 01:21:40.780 "method": "nvmf_create_subsystem", 01:21:40.780 "params": { 01:21:40.780 "nqn": "nqn.2016-06.io.spdk:cnode22025", 01:21:40.780 "model_number": "mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'\''gmJ0>ej>" 01:21:40.780 } 01:21:40.780 } 01:21:40.780 Got JSON-RPC error response 01:21:40.780 GoRPCClient: error on JSON-RPC call' 01:21:40.780 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'gmJ0>ej> nqn:nqn.2016-06.io.spdk:cnode22025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'gmJ0>ej> 01:21:40.780 request: 01:21:40.780 { 01:21:40.780 "method": "nvmf_create_subsystem", 01:21:40.780 "params": { 01:21:40.780 "nqn": "nqn.2016-06.io.spdk:cnode22025", 01:21:40.780 "model_number": "mh)C|dY_MccVCcRaiPR{i#esok7!q/d*'gmJ0>ej>" 01:21:40.780 } 01:21:40.780 } 01:21:40.780 Got JSON-RPC error response 01:21:40.780 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 01:21:40.780 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_transport --trtype tcp 01:21:41.119 [2024-12-09 05:16:32.403940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 01:21:41.119 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 01:21:41.380 [2024-12-09 05:16:32.780258] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 01:21:41.380 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 01:21:41.380 request: 01:21:41.380 { 01:21:41.380 "method": "nvmf_subsystem_remove_listener", 01:21:41.380 "params": { 01:21:41.380 "nqn": "nqn.2016-06.io.spdk:cnode", 01:21:41.380 "listen_address": { 01:21:41.380 "trtype": "tcp", 01:21:41.380 "traddr": "", 01:21:41.380 "trsvcid": "4421" 01:21:41.380 } 01:21:41.380 } 01:21:41.380 } 01:21:41.380 Got JSON-RPC error response 01:21:41.380 GoRPCClient: error on JSON-RPC call' 01:21:41.380 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 01:21:41.380 request: 01:21:41.380 { 01:21:41.380 "method": "nvmf_subsystem_remove_listener", 01:21:41.380 "params": { 01:21:41.380 "nqn": "nqn.2016-06.io.spdk:cnode", 01:21:41.380 "listen_address": { 01:21:41.380 "trtype": "tcp", 01:21:41.380 "traddr": "", 01:21:41.380 "trsvcid": "4421" 01:21:41.380 } 01:21:41.380 } 01:21:41.380 } 01:21:41.380 Got JSON-RPC error response 01:21:41.380 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 01:21:41.380 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17771 -i 0 01:21:41.380 [2024-12-09 05:16:32.980361] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17771: invalid cntlid range [0-65519] 01:21:41.641 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17771], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 01:21:41.641 request: 01:21:41.641 { 01:21:41.641 "method": "nvmf_create_subsystem", 01:21:41.641 "params": { 
01:21:41.641 "nqn": "nqn.2016-06.io.spdk:cnode17771", 01:21:41.641 "min_cntlid": 0 01:21:41.641 } 01:21:41.641 } 01:21:41.641 Got JSON-RPC error response 01:21:41.641 GoRPCClient: error on JSON-RPC call' 01:21:41.641 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/09 05:16:32 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17771], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 01:21:41.641 request: 01:21:41.641 { 01:21:41.641 "method": "nvmf_create_subsystem", 01:21:41.641 "params": { 01:21:41.641 "nqn": "nqn.2016-06.io.spdk:cnode17771", 01:21:41.641 "min_cntlid": 0 01:21:41.641 } 01:21:41.641 } 01:21:41.641 Got JSON-RPC error response 01:21:41.641 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:21:41.641 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21351 -i 65520 01:21:41.641 [2024-12-09 05:16:33.144478] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21351: invalid cntlid range [65520-65519] 01:21:41.641 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21351], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 01:21:41.641 request: 01:21:41.641 { 01:21:41.641 "method": "nvmf_create_subsystem", 01:21:41.641 "params": { 01:21:41.641 "nqn": "nqn.2016-06.io.spdk:cnode21351", 01:21:41.641 "min_cntlid": 65520 01:21:41.641 } 01:21:41.641 } 01:21:41.641 Got JSON-RPC error response 01:21:41.641 GoRPCClient: error on JSON-RPC call' 01:21:41.641 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21351], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 01:21:41.641 request: 01:21:41.641 { 01:21:41.641 "method": "nvmf_create_subsystem", 01:21:41.641 "params": { 01:21:41.641 "nqn": "nqn.2016-06.io.spdk:cnode21351", 01:21:41.641 "min_cntlid": 65520 01:21:41.641 } 01:21:41.641 } 01:21:41.641 Got JSON-RPC error response 01:21:41.641 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:21:41.641 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11747 -I 0 01:21:41.903 [2024-12-09 05:16:33.316582] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11747: invalid cntlid range [1-0] 01:21:41.903 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11747], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 01:21:41.903 request: 01:21:41.903 { 01:21:41.903 "method": "nvmf_create_subsystem", 01:21:41.903 "params": { 01:21:41.903 "nqn": "nqn.2016-06.io.spdk:cnode11747", 01:21:41.903 "max_cntlid": 0 01:21:41.903 } 
01:21:41.903 } 01:21:41.903 Got JSON-RPC error response 01:21:41.903 GoRPCClient: error on JSON-RPC call' 01:21:41.903 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11747], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 01:21:41.903 request: 01:21:41.903 { 01:21:41.903 "method": "nvmf_create_subsystem", 01:21:41.903 "params": { 01:21:41.903 "nqn": "nqn.2016-06.io.spdk:cnode11747", 01:21:41.903 "max_cntlid": 0 01:21:41.903 } 01:21:41.903 } 01:21:41.903 Got JSON-RPC error response 01:21:41.903 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:21:41.903 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12037 -I 65520 01:21:42.165 [2024-12-09 05:16:33.521410] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12037: invalid cntlid range [1-65520] 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12037], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 01:21:42.165 request: 01:21:42.165 { 01:21:42.165 "method": "nvmf_create_subsystem", 01:21:42.165 "params": { 01:21:42.165 "nqn": "nqn.2016-06.io.spdk:cnode12037", 01:21:42.165 "max_cntlid": 65520 01:21:42.165 } 01:21:42.165 } 01:21:42.165 Got JSON-RPC error response 01:21:42.165 GoRPCClient: error on JSON-RPC call' 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode12037], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 01:21:42.165 request: 01:21:42.165 { 01:21:42.165 "method": "nvmf_create_subsystem", 01:21:42.165 "params": { 01:21:42.165 "nqn": "nqn.2016-06.io.spdk:cnode12037", 01:21:42.165 "max_cntlid": 65520 01:21:42.165 } 01:21:42.165 } 01:21:42.165 Got JSON-RPC error response 01:21:42.165 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19215 -i 6 -I 5 01:21:42.165 [2024-12-09 05:16:33.725547] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19215: invalid cntlid range [6-5] 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19215], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 01:21:42.165 request: 01:21:42.165 { 01:21:42.165 "method": "nvmf_create_subsystem", 01:21:42.165 "params": { 01:21:42.165 "nqn": "nqn.2016-06.io.spdk:cnode19215", 01:21:42.165 "min_cntlid": 6, 01:21:42.165 "max_cntlid": 5 01:21:42.165 } 01:21:42.165 } 01:21:42.165 Got JSON-RPC error response 01:21:42.165 
GoRPCClient: error on JSON-RPC call' 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/09 05:16:33 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19215], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 01:21:42.165 request: 01:21:42.165 { 01:21:42.165 "method": "nvmf_create_subsystem", 01:21:42.165 "params": { 01:21:42.165 "nqn": "nqn.2016-06.io.spdk:cnode19215", 01:21:42.165 "min_cntlid": 6, 01:21:42.165 "max_cntlid": 5 01:21:42.165 } 01:21:42.165 } 01:21:42.165 Got JSON-RPC error response 01:21:42.165 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 01:21:42.165 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 01:21:42.426 { 01:21:42.426 "name": "foobar", 01:21:42.426 "method": "nvmf_delete_target", 01:21:42.426 "req_id": 1 01:21:42.426 } 01:21:42.426 Got JSON-RPC error response 01:21:42.426 response: 01:21:42.426 { 01:21:42.426 "code": -32602, 01:21:42.426 "message": "The specified target doesn'\''t exist, cannot delete it." 01:21:42.426 }' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 01:21:42.426 { 01:21:42.426 "name": "foobar", 01:21:42.426 "method": "nvmf_delete_target", 01:21:42.426 "req_id": 1 01:21:42.426 } 01:21:42.426 Got JSON-RPC error response 01:21:42.426 response: 01:21:42.426 { 01:21:42.426 "code": -32602, 01:21:42.426 "message": "The specified target doesn't exist, cannot delete it." 
01:21:42.426 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:42.426 rmmod nvme_tcp 01:21:42.426 rmmod nvme_fabrics 01:21:42.426 rmmod nvme_keyring 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 73842 ']' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 73842 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 73842 ']' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 73842 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73842 01:21:42.426 killing process with pid 73842 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73842' 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 73842 01:21:42.426 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 73842 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:42.687 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:42.688 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 01:21:42.949 ************************************ 01:21:42.949 END TEST nvmf_invalid 01:21:42.949 ************************************ 01:21:42.949 01:21:42.949 real 0m4.708s 01:21:42.949 user 0m17.673s 01:21:42.949 sys 0m0.979s 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:42.949 ************************************ 01:21:42.949 START TEST nvmf_connect_stress 01:21:42.949 
************************************ 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 01:21:42.949 * Looking for test storage... 01:21:42.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:42.949 --rc genhtml_branch_coverage=1 01:21:42.949 --rc genhtml_function_coverage=1 01:21:42.949 --rc genhtml_legend=1 01:21:42.949 --rc geninfo_all_blocks=1 01:21:42.949 --rc geninfo_unexecuted_blocks=1 01:21:42.949 01:21:42.949 ' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:42.949 --rc genhtml_branch_coverage=1 01:21:42.949 --rc genhtml_function_coverage=1 01:21:42.949 --rc genhtml_legend=1 01:21:42.949 --rc geninfo_all_blocks=1 01:21:42.949 --rc geninfo_unexecuted_blocks=1 01:21:42.949 01:21:42.949 ' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:42.949 --rc genhtml_branch_coverage=1 01:21:42.949 --rc genhtml_function_coverage=1 01:21:42.949 --rc genhtml_legend=1 01:21:42.949 --rc geninfo_all_blocks=1 01:21:42.949 --rc geninfo_unexecuted_blocks=1 01:21:42.949 01:21:42.949 ' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:42.949 --rc genhtml_branch_coverage=1 01:21:42.949 --rc genhtml_function_coverage=1 01:21:42.949 --rc genhtml_legend=1 01:21:42.949 --rc geninfo_all_blocks=1 01:21:42.949 --rc geninfo_unexecuted_blocks=1 01:21:42.949 01:21:42.949 ' 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
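Note (editorial sketch): the lcov version probe traced above steps through the cmp_versions/lt helpers in scripts/common.sh. The following is a condensed reconstruction of that comparison logic based only on the commands visible in the xtrace (function and variable names — decimal, cmp_versions, lt, ver1/ver2, op — come from the trace; the here-string splitting and the operators other than '<' are assumptions, and the real helper in scripts/common.sh may handle more cases).

#!/usr/bin/env bash
# Sketch only: reconstructed from the xtrace above, not copied from scripts/common.sh.

# Normalize a version component: keep it if it is a plain integer, otherwise use 0
# (the trace shows a decimal() helper run on each component before comparison).
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0
    echo "$d"
}

# Compare two dotted version strings with an operator such as '<' or '>='.
cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2 lt=0 gt=0 eq=0 v
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk the components of the longer version, padding the shorter one with 0s.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        if (( ver1[v] > ver2[v] )); then gt=1; break; fi
        if (( ver1[v] < ver2[v] )); then lt=1; break; fi
    done
    (( gt == 0 && lt == 0 )) && eq=1
    case "$op" in
        '<')  (( lt == 1 )) ;;
        '>')  (( gt == 1 )) ;;
        '<=') (( lt == 1 || eq == 1 )) ;;
        '>=') (( gt == 1 || eq == 1 )) ;;
        '==') (( eq == 1 )) ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }

# Usage matching the trace: lt 1.15 2 succeeds because lcov 1.15 predates 2.x,
# which is what selects the extra '--rc lcov_branch_coverage=1 ...' options above.
if lt 1.15 2; then
    echo "lcov is older than 2.x"
fi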
01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:42.949 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:42.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 01:21:42.950 05:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:42.950 05:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:42.950 Cannot find device "nvmf_init_br" 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:42.950 Cannot find device "nvmf_init_br2" 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:42.950 Cannot find device "nvmf_tgt_br" 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 01:21:42.950 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:43.212 Cannot find device "nvmf_tgt_br2" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:43.212 Cannot find device "nvmf_init_br" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:43.212 Cannot find device "nvmf_init_br2" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:43.212 Cannot find device "nvmf_tgt_br" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:43.212 Cannot find device "nvmf_tgt_br2" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:43.212 Cannot find device "nvmf_br" 01:21:43.212 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:43.213 Cannot find device "nvmf_init_if" 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:43.213 Cannot find device "nvmf_init_if2" 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:43.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:43.213 05:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:43.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:43.213 05:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:43.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:43.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:21:43.213 01:21:43.213 --- 10.0.0.3 ping statistics --- 01:21:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:43.213 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:43.213 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:43.213 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 01:21:43.213 01:21:43.213 --- 10.0.0.4 ping statistics --- 01:21:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:43.213 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:43.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:43.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 01:21:43.213 01:21:43.213 --- 10.0.0.1 ping statistics --- 01:21:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:43.213 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:43.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:21:43.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:21:43.213 01:21:43.213 --- 10.0.0.2 ping statistics --- 01:21:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:43.213 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74392 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74392 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74392 ']' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:43.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:43.213 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:21:43.472 [2024-12-09 05:16:34.856241] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:21:43.472 [2024-12-09 05:16:34.856391] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:43.472 [2024-12-09 05:16:34.996997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:21:43.472 [2024-12-09 05:16:35.036536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:43.472 [2024-12-09 05:16:35.036584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:43.472 [2024-12-09 05:16:35.036590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:43.472 [2024-12-09 05:16:35.036596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:43.472 [2024-12-09 05:16:35.036600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:43.472 [2024-12-09 05:16:35.037438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:21:43.472 [2024-12-09 05:16:35.038236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:21:43.472 [2024-12-09 05:16:35.038343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.413 [2024-12-09 05:16:35.774871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:44.413 05:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.413 [2024-12-09 05:16:35.791982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.413 NULL1 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74439 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.413 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.675 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 01:21:44.675 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:44.675 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:44.675 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.675 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:44.936 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.936 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:44.936 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:44.936 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.936 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:45.505 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.505 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:45.505 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:45.505 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.505 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:45.762 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.762 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:45.762 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:45.762 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.762 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:46.019 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.019 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:46.019 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:46.019 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.019 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:46.277 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.277 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:46.277 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:46.277 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.277 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:46.534 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.534 
05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:46.534 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:46.534 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.534 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:47.100 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.100 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:47.100 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:47.100 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.100 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:47.360 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.360 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:47.360 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:47.360 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.360 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:47.620 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.620 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:47.620 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:47.620 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.620 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:47.880 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.880 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:47.880 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:47.880 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.880 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:48.140 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.140 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:48.140 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:48.140 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.140 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:48.719 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.719 05:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:48.719 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:48.719 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.719 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:48.977 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.977 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:48.977 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:48.977 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.977 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:49.235 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:49.235 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:49.235 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:49.235 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:49.235 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:49.494 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:49.494 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:49.494 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:49.494 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:49.494 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:49.752 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:49.752 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:49.752 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:49.752 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:49.752 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:50.322 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.322 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:50.322 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:50.322 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.322 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:50.584 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.584 05:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:50.584 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:50.584 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.584 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:50.845 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.845 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:50.845 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:50.845 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.845 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:51.106 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.106 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:51.106 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:51.106 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.106 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:51.367 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.367 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:51.367 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:51.367 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.367 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:51.628 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.628 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:51.628 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:51.628 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.628 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:52.198 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:52.198 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:52.198 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:52.198 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:52.198 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:52.459 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:52.459 05:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:52.459 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:52.459 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:52.459 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:52.723 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:52.723 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:52.723 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:52.723 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:52.723 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:52.985 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:52.985 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:52.985 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:52.985 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:52.985 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:53.244 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.244 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:53.244 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:53.244 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.244 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:53.815 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.815 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:53.815 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:53.815 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.815 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:54.076 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.076 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:54.076 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:54.076 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.076 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:54.336 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.336 05:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:54.336 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 01:21:54.336 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.336 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:54.608 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74439 01:21:54.608 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74439) - No such process 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74439 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:54.608 rmmod nvme_tcp 01:21:54.608 rmmod nvme_fabrics 01:21:54.608 rmmod nvme_keyring 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74392 ']' 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74392 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74392 ']' 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74392 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:54.608 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74392 01:21:54.869 killing process with pid 74392 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:21:54.869 
05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74392' 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74392 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74392 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:54.869 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:54.870 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:54.870 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:54.870 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:21:54.870 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:54.870 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:55.130 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 01:21:55.130 01:21:55.130 real 0m12.217s 01:21:55.130 user 0m41.123s 01:21:55.130 sys 0m2.609s 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:55.130 ************************************ 01:21:55.130 END TEST nvmf_connect_stress 01:21:55.130 ************************************ 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:55.130 ************************************ 01:21:55.130 START TEST nvmf_fused_ordering 01:21:55.130 ************************************ 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 01:21:55.130 * Looking for test storage... 01:21:55.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:55.130 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:55.131 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 01:21:55.131 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:55.393 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:55.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:55.393 --rc genhtml_branch_coverage=1 01:21:55.393 --rc genhtml_function_coverage=1 01:21:55.393 --rc genhtml_legend=1 01:21:55.393 --rc geninfo_all_blocks=1 01:21:55.393 --rc geninfo_unexecuted_blocks=1 01:21:55.393 01:21:55.393 ' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:55.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:55.393 --rc genhtml_branch_coverage=1 01:21:55.393 --rc genhtml_function_coverage=1 01:21:55.393 --rc genhtml_legend=1 01:21:55.393 --rc geninfo_all_blocks=1 01:21:55.393 --rc geninfo_unexecuted_blocks=1 01:21:55.393 01:21:55.393 ' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:55.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:55.393 --rc genhtml_branch_coverage=1 01:21:55.393 --rc genhtml_function_coverage=1 01:21:55.393 --rc genhtml_legend=1 01:21:55.393 --rc geninfo_all_blocks=1 01:21:55.393 --rc geninfo_unexecuted_blocks=1 01:21:55.393 01:21:55.393 ' 01:21:55.393 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:55.393 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:21:55.393 --rc genhtml_branch_coverage=1 01:21:55.393 --rc genhtml_function_coverage=1 01:21:55.393 --rc genhtml_legend=1 01:21:55.393 --rc geninfo_all_blocks=1 01:21:55.393 --rc geninfo_unexecuted_blocks=1 01:21:55.393 01:21:55.394 ' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 01:21:55.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:55.394 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:55.394 Cannot find device "nvmf_init_br" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:55.394 Cannot find device "nvmf_init_br2" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:55.394 Cannot find device "nvmf_tgt_br" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:55.394 Cannot find device "nvmf_tgt_br2" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:55.394 Cannot find device "nvmf_init_br" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:55.394 Cannot find device "nvmf_init_br2" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:55.394 Cannot find device "nvmf_tgt_br" 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 01:21:55.394 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:55.394 Cannot find device "nvmf_tgt_br2" 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:55.395 Cannot find device "nvmf_br" 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:55.395 Cannot find device "nvmf_init_if" 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 01:21:55.395 
05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:55.395 Cannot find device "nvmf_init_if2" 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:55.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:55.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:55.395 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:55.395 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:55.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:55.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 01:21:55.656 01:21:55.656 --- 10.0.0.3 ping statistics --- 01:21:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:55.656 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:55.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:21:55.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 01:21:55.656 01:21:55.656 --- 10.0.0.4 ping statistics --- 01:21:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:55.656 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:55.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:21:55.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:21:55.656 01:21:55.656 --- 10.0.0.1 ping statistics --- 01:21:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:55.656 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:55.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:21:55.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 01:21:55.656 01:21:55.656 --- 10.0.0.2 ping statistics --- 01:21:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:55.656 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:55.656 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=74830 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 74830 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 74830 ']' 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:55.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
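For readers reconstructing the environment by hand: the nvmf_veth_init steps traced above amount to one network namespace, four veth pairs, a bridge, three tagged iptables rules and a set of sanity pings. Below is a minimal stand-alone sketch of that topology, not the harness code itself; names and addresses are copied from the trace, it assumes root plus iproute2/iptables, and the iptables comment text is shortened to just the SPDK_NVMF tag that the later cleanup greps for.

#!/usr/bin/env bash
# Minimal stand-alone version of the veth/bridge topology built above.
# Assumes root, iproute2 and iptables; names/addresses copied from the trace.
set -euo pipefail

NS=nvmf_tgt_ns_spdk

# Target-side interfaces live in their own network namespace.
ip netns add "$NS"

# Two veth pairs for the initiator side, two for the target side; the
# *_br peers stay in the root namespace and are enslaved to a bridge below.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and address everything
# out of 10.0.0.0/24, matching the trace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every end up (including loopback inside the namespace).
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties the root-namespace peers together so the initiator
# addresses can reach the namespaced target addresses on one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open TCP/4420 on the initiator interfaces and allow bridged forwarding,
# tagging each rule so the matching cleanup can remove exactly these rules.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: init_if 4420'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: init_if2 4420'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: bridge forward'

# Sanity checks matching the trace: root namespace -> target and back.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2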
01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:55.657 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:55.657 [2024-12-09 05:16:47.116765] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:55.657 [2024-12-09 05:16:47.116806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:55.657 [2024-12-09 05:16:47.255706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:55.917 [2024-12-09 05:16:47.295596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:55.917 [2024-12-09 05:16:47.295632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:55.917 [2024-12-09 05:16:47.295639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:55.917 [2024-12-09 05:16:47.295644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:55.917 [2024-12-09 05:16:47.295649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:55.917 [2024-12-09 05:16:47.295927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.491 [2024-12-09 05:16:48.062638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.491 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.492 [2024-12-09 05:16:48.078745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.492 NULL1 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.492 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:56.753 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.753 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:21:56.753 [2024-12-09 05:16:48.130265] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
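Replayed outside the harness, the target bring-up and fused_ordering run traced above reduce to starting nvmf_tgt inside the namespace, configuring it over JSON-RPC, and pointing the exerciser at the 10.0.0.3:4420 listener. The sketch below works under those assumptions and is not the test script itself: paths and RPC arguments are copied from the trace, waitforlisten is approximated by polling for /var/tmp/spdk.sock, and error handling is minimal.

#!/usr/bin/env bash
# Stand-alone sketch of the fused_ordering bring-up traced above.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
NS=nvmf_tgt_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Start the target inside the namespace: shared-memory id 0, all tracepoint
# groups enabled, core mask 0x2 -- the same flags seen in the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
tgt_pid=$!

# Crude stand-in for the harness's waitforlisten: wait for the RPC socket.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512        # ~1 GB null bdev, 512 B blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns "$NQN" NULL1

# Run the exerciser against the listener (1024 iterations in this run).
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:$NQN"

kill "$tgt_pid" && wait "$tgt_pid" || true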
01:21:56.753 [2024-12-09 05:16:48.130310] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74880 ] 01:21:57.015 Attached to nqn.2016-06.io.spdk:cnode1 01:21:57.015 Namespace ID: 1 size: 1GB 01:21:57.015 fused_ordering(0) 01:21:57.015 fused_ordering(1) 01:21:57.015 fused_ordering(2) 01:21:57.015 fused_ordering(3) 01:21:57.015 fused_ordering(4) 01:21:57.015 fused_ordering(5) 01:21:57.015 fused_ordering(6) 01:21:57.015 fused_ordering(7) 01:21:57.015 fused_ordering(8) 01:21:57.015 fused_ordering(9) 01:21:57.015 fused_ordering(10) 01:21:57.015 fused_ordering(11) 01:21:57.015 fused_ordering(12) 01:21:57.015 fused_ordering(13) 01:21:57.015 fused_ordering(14) 01:21:57.015 fused_ordering(15) 01:21:57.015 fused_ordering(16) 01:21:57.015 fused_ordering(17) 01:21:57.015 fused_ordering(18) 01:21:57.015 fused_ordering(19) 01:21:57.015 fused_ordering(20) 01:21:57.015 fused_ordering(21) 01:21:57.015 fused_ordering(22) 01:21:57.015 fused_ordering(23) 01:21:57.015 fused_ordering(24) 01:21:57.015 fused_ordering(25) 01:21:57.015 fused_ordering(26) 01:21:57.015 fused_ordering(27) 01:21:57.015 fused_ordering(28) 01:21:57.015 fused_ordering(29) 01:21:57.015 fused_ordering(30) 01:21:57.015 fused_ordering(31) 01:21:57.015 fused_ordering(32) 01:21:57.015 fused_ordering(33) 01:21:57.015 fused_ordering(34) 01:21:57.015 fused_ordering(35) 01:21:57.015 fused_ordering(36) 01:21:57.015 fused_ordering(37) 01:21:57.015 fused_ordering(38) 01:21:57.015 fused_ordering(39) 01:21:57.015 fused_ordering(40) 01:21:57.015 fused_ordering(41) 01:21:57.015 fused_ordering(42) 01:21:57.015 fused_ordering(43) 01:21:57.015 fused_ordering(44) 01:21:57.015 fused_ordering(45) 01:21:57.015 fused_ordering(46) 01:21:57.015 fused_ordering(47) 01:21:57.015 fused_ordering(48) 01:21:57.015 fused_ordering(49) 01:21:57.015 fused_ordering(50) 01:21:57.015 fused_ordering(51) 01:21:57.015 fused_ordering(52) 01:21:57.015 fused_ordering(53) 01:21:57.015 fused_ordering(54) 01:21:57.015 fused_ordering(55) 01:21:57.015 fused_ordering(56) 01:21:57.015 fused_ordering(57) 01:21:57.015 fused_ordering(58) 01:21:57.015 fused_ordering(59) 01:21:57.015 fused_ordering(60) 01:21:57.015 fused_ordering(61) 01:21:57.015 fused_ordering(62) 01:21:57.015 fused_ordering(63) 01:21:57.015 fused_ordering(64) 01:21:57.015 fused_ordering(65) 01:21:57.015 fused_ordering(66) 01:21:57.015 fused_ordering(67) 01:21:57.015 fused_ordering(68) 01:21:57.015 fused_ordering(69) 01:21:57.015 fused_ordering(70) 01:21:57.015 fused_ordering(71) 01:21:57.015 fused_ordering(72) 01:21:57.015 fused_ordering(73) 01:21:57.015 fused_ordering(74) 01:21:57.015 fused_ordering(75) 01:21:57.015 fused_ordering(76) 01:21:57.015 fused_ordering(77) 01:21:57.015 fused_ordering(78) 01:21:57.015 fused_ordering(79) 01:21:57.015 fused_ordering(80) 01:21:57.015 fused_ordering(81) 01:21:57.015 fused_ordering(82) 01:21:57.015 fused_ordering(83) 01:21:57.015 fused_ordering(84) 01:21:57.015 fused_ordering(85) 01:21:57.015 fused_ordering(86) 01:21:57.015 fused_ordering(87) 01:21:57.015 fused_ordering(88) 01:21:57.015 fused_ordering(89) 01:21:57.015 fused_ordering(90) 01:21:57.015 fused_ordering(91) 01:21:57.015 fused_ordering(92) 01:21:57.015 fused_ordering(93) 01:21:57.015 fused_ordering(94) 01:21:57.015 fused_ordering(95) 01:21:57.015 fused_ordering(96) 01:21:57.015 fused_ordering(97) 01:21:57.015 
fused_ordering(98) 01:21:57.015 fused_ordering(99) 01:21:57.015 fused_ordering(100) 01:21:57.015 fused_ordering(101) 01:21:57.015 fused_ordering(102) 01:21:57.015 fused_ordering(103) 01:21:57.015 fused_ordering(104) 01:21:57.015 fused_ordering(105) 01:21:57.015 fused_ordering(106) 01:21:57.015 fused_ordering(107) 01:21:57.015 fused_ordering(108) 01:21:57.015 fused_ordering(109) 01:21:57.015 fused_ordering(110) 01:21:57.015 fused_ordering(111) 01:21:57.015 fused_ordering(112) 01:21:57.015 fused_ordering(113) 01:21:57.015 fused_ordering(114) 01:21:57.015 fused_ordering(115) 01:21:57.015 fused_ordering(116) 01:21:57.015 fused_ordering(117) 01:21:57.015 fused_ordering(118) 01:21:57.015 fused_ordering(119) 01:21:57.015 fused_ordering(120) 01:21:57.015 fused_ordering(121) 01:21:57.015 fused_ordering(122) 01:21:57.015 fused_ordering(123) 01:21:57.015 fused_ordering(124) 01:21:57.015 fused_ordering(125) 01:21:57.015 fused_ordering(126) 01:21:57.015 fused_ordering(127) 01:21:57.015 fused_ordering(128) 01:21:57.015 fused_ordering(129) 01:21:57.015 fused_ordering(130) 01:21:57.015 fused_ordering(131) 01:21:57.015 fused_ordering(132) 01:21:57.015 fused_ordering(133) 01:21:57.015 fused_ordering(134) 01:21:57.015 fused_ordering(135) 01:21:57.015 fused_ordering(136) 01:21:57.015 fused_ordering(137) 01:21:57.015 fused_ordering(138) 01:21:57.015 fused_ordering(139) 01:21:57.015 fused_ordering(140) 01:21:57.015 fused_ordering(141) 01:21:57.015 fused_ordering(142) 01:21:57.015 fused_ordering(143) 01:21:57.015 fused_ordering(144) 01:21:57.015 fused_ordering(145) 01:21:57.016 fused_ordering(146) 01:21:57.016 fused_ordering(147) 01:21:57.016 fused_ordering(148) 01:21:57.016 fused_ordering(149) 01:21:57.016 fused_ordering(150) 01:21:57.016 fused_ordering(151) 01:21:57.016 fused_ordering(152) 01:21:57.016 fused_ordering(153) 01:21:57.016 fused_ordering(154) 01:21:57.016 fused_ordering(155) 01:21:57.016 fused_ordering(156) 01:21:57.016 fused_ordering(157) 01:21:57.016 fused_ordering(158) 01:21:57.016 fused_ordering(159) 01:21:57.016 fused_ordering(160) 01:21:57.016 fused_ordering(161) 01:21:57.016 fused_ordering(162) 01:21:57.016 fused_ordering(163) 01:21:57.016 fused_ordering(164) 01:21:57.016 fused_ordering(165) 01:21:57.016 fused_ordering(166) 01:21:57.016 fused_ordering(167) 01:21:57.016 fused_ordering(168) 01:21:57.016 fused_ordering(169) 01:21:57.016 fused_ordering(170) 01:21:57.016 fused_ordering(171) 01:21:57.016 fused_ordering(172) 01:21:57.016 fused_ordering(173) 01:21:57.016 fused_ordering(174) 01:21:57.016 fused_ordering(175) 01:21:57.016 fused_ordering(176) 01:21:57.016 fused_ordering(177) 01:21:57.016 fused_ordering(178) 01:21:57.016 fused_ordering(179) 01:21:57.016 fused_ordering(180) 01:21:57.016 fused_ordering(181) 01:21:57.016 fused_ordering(182) 01:21:57.016 fused_ordering(183) 01:21:57.016 fused_ordering(184) 01:21:57.016 fused_ordering(185) 01:21:57.016 fused_ordering(186) 01:21:57.016 fused_ordering(187) 01:21:57.016 fused_ordering(188) 01:21:57.016 fused_ordering(189) 01:21:57.016 fused_ordering(190) 01:21:57.016 fused_ordering(191) 01:21:57.016 fused_ordering(192) 01:21:57.016 fused_ordering(193) 01:21:57.016 fused_ordering(194) 01:21:57.016 fused_ordering(195) 01:21:57.016 fused_ordering(196) 01:21:57.016 fused_ordering(197) 01:21:57.016 fused_ordering(198) 01:21:57.016 fused_ordering(199) 01:21:57.016 fused_ordering(200) 01:21:57.016 fused_ordering(201) 01:21:57.016 fused_ordering(202) 01:21:57.016 fused_ordering(203) 01:21:57.016 fused_ordering(204) 01:21:57.016 fused_ordering(205) 
01:21:57.275 fused_ordering(206) 01:21:57.275 fused_ordering(207) 01:21:57.275 fused_ordering(208) 01:21:57.275 fused_ordering(209) 01:21:57.275 fused_ordering(210) 01:21:57.275 fused_ordering(211) 01:21:57.275 fused_ordering(212) 01:21:57.275 fused_ordering(213) 01:21:57.275 fused_ordering(214) 01:21:57.275 fused_ordering(215) 01:21:57.275 fused_ordering(216) 01:21:57.275 fused_ordering(217) 01:21:57.275 fused_ordering(218) 01:21:57.275 fused_ordering(219) 01:21:57.275 fused_ordering(220) 01:21:57.276 fused_ordering(221) 01:21:57.276 fused_ordering(222) 01:21:57.276 fused_ordering(223) 01:21:57.276 fused_ordering(224) 01:21:57.276 fused_ordering(225) 01:21:57.276 fused_ordering(226) 01:21:57.276 fused_ordering(227) 01:21:57.276 fused_ordering(228) 01:21:57.276 fused_ordering(229) 01:21:57.276 fused_ordering(230) 01:21:57.276 fused_ordering(231) 01:21:57.276 fused_ordering(232) 01:21:57.276 fused_ordering(233) 01:21:57.276 fused_ordering(234) 01:21:57.276 fused_ordering(235) 01:21:57.276 fused_ordering(236) 01:21:57.276 fused_ordering(237) 01:21:57.276 fused_ordering(238) 01:21:57.276 fused_ordering(239) 01:21:57.276 fused_ordering(240) 01:21:57.276 fused_ordering(241) 01:21:57.276 fused_ordering(242) 01:21:57.276 fused_ordering(243) 01:21:57.276 fused_ordering(244) 01:21:57.276 fused_ordering(245) 01:21:57.276 fused_ordering(246) 01:21:57.276 fused_ordering(247) 01:21:57.276 fused_ordering(248) 01:21:57.276 fused_ordering(249) 01:21:57.276 fused_ordering(250) 01:21:57.276 fused_ordering(251) 01:21:57.276 fused_ordering(252) 01:21:57.276 fused_ordering(253) 01:21:57.276 fused_ordering(254) 01:21:57.276 fused_ordering(255) 01:21:57.276 fused_ordering(256) 01:21:57.276 fused_ordering(257) 01:21:57.276 fused_ordering(258) 01:21:57.276 fused_ordering(259) 01:21:57.276 fused_ordering(260) 01:21:57.276 fused_ordering(261) 01:21:57.276 fused_ordering(262) 01:21:57.276 fused_ordering(263) 01:21:57.276 fused_ordering(264) 01:21:57.276 fused_ordering(265) 01:21:57.276 fused_ordering(266) 01:21:57.276 fused_ordering(267) 01:21:57.276 fused_ordering(268) 01:21:57.276 fused_ordering(269) 01:21:57.276 fused_ordering(270) 01:21:57.276 fused_ordering(271) 01:21:57.276 fused_ordering(272) 01:21:57.276 fused_ordering(273) 01:21:57.276 fused_ordering(274) 01:21:57.276 fused_ordering(275) 01:21:57.276 fused_ordering(276) 01:21:57.276 fused_ordering(277) 01:21:57.276 fused_ordering(278) 01:21:57.276 fused_ordering(279) 01:21:57.276 fused_ordering(280) 01:21:57.276 fused_ordering(281) 01:21:57.276 fused_ordering(282) 01:21:57.276 fused_ordering(283) 01:21:57.276 fused_ordering(284) 01:21:57.276 fused_ordering(285) 01:21:57.276 fused_ordering(286) 01:21:57.276 fused_ordering(287) 01:21:57.276 fused_ordering(288) 01:21:57.276 fused_ordering(289) 01:21:57.276 fused_ordering(290) 01:21:57.276 fused_ordering(291) 01:21:57.276 fused_ordering(292) 01:21:57.276 fused_ordering(293) 01:21:57.276 fused_ordering(294) 01:21:57.276 fused_ordering(295) 01:21:57.276 fused_ordering(296) 01:21:57.276 fused_ordering(297) 01:21:57.276 fused_ordering(298) 01:21:57.276 fused_ordering(299) 01:21:57.276 fused_ordering(300) 01:21:57.276 fused_ordering(301) 01:21:57.276 fused_ordering(302) 01:21:57.276 fused_ordering(303) 01:21:57.276 fused_ordering(304) 01:21:57.276 fused_ordering(305) 01:21:57.276 fused_ordering(306) 01:21:57.276 fused_ordering(307) 01:21:57.276 fused_ordering(308) 01:21:57.276 fused_ordering(309) 01:21:57.276 fused_ordering(310) 01:21:57.276 fused_ordering(311) 01:21:57.276 fused_ordering(312) 01:21:57.276 
fused_ordering(313) 01:21:57.276 fused_ordering(314) 01:21:57.276 fused_ordering(315) 01:21:57.276 fused_ordering(316) 01:21:57.276 fused_ordering(317) 01:21:57.276 fused_ordering(318) 01:21:57.276 fused_ordering(319) 01:21:57.276 fused_ordering(320) 01:21:57.276 fused_ordering(321) 01:21:57.276 fused_ordering(322) 01:21:57.276 fused_ordering(323) 01:21:57.276 fused_ordering(324) 01:21:57.276 fused_ordering(325) 01:21:57.276 fused_ordering(326) 01:21:57.276 fused_ordering(327) 01:21:57.276 fused_ordering(328) 01:21:57.276 fused_ordering(329) 01:21:57.276 fused_ordering(330) 01:21:57.276 fused_ordering(331) 01:21:57.276 fused_ordering(332) 01:21:57.276 fused_ordering(333) 01:21:57.276 fused_ordering(334) 01:21:57.276 fused_ordering(335) 01:21:57.276 fused_ordering(336) 01:21:57.276 fused_ordering(337) 01:21:57.276 fused_ordering(338) 01:21:57.276 fused_ordering(339) 01:21:57.276 fused_ordering(340) 01:21:57.276 fused_ordering(341) 01:21:57.276 fused_ordering(342) 01:21:57.276 fused_ordering(343) 01:21:57.276 fused_ordering(344) 01:21:57.276 fused_ordering(345) 01:21:57.276 fused_ordering(346) 01:21:57.276 fused_ordering(347) 01:21:57.276 fused_ordering(348) 01:21:57.276 fused_ordering(349) 01:21:57.276 fused_ordering(350) 01:21:57.276 fused_ordering(351) 01:21:57.276 fused_ordering(352) 01:21:57.276 fused_ordering(353) 01:21:57.276 fused_ordering(354) 01:21:57.276 fused_ordering(355) 01:21:57.276 fused_ordering(356) 01:21:57.276 fused_ordering(357) 01:21:57.276 fused_ordering(358) 01:21:57.276 fused_ordering(359) 01:21:57.276 fused_ordering(360) 01:21:57.276 fused_ordering(361) 01:21:57.276 fused_ordering(362) 01:21:57.276 fused_ordering(363) 01:21:57.276 fused_ordering(364) 01:21:57.276 fused_ordering(365) 01:21:57.276 fused_ordering(366) 01:21:57.276 fused_ordering(367) 01:21:57.276 fused_ordering(368) 01:21:57.276 fused_ordering(369) 01:21:57.276 fused_ordering(370) 01:21:57.276 fused_ordering(371) 01:21:57.276 fused_ordering(372) 01:21:57.276 fused_ordering(373) 01:21:57.276 fused_ordering(374) 01:21:57.276 fused_ordering(375) 01:21:57.276 fused_ordering(376) 01:21:57.276 fused_ordering(377) 01:21:57.276 fused_ordering(378) 01:21:57.276 fused_ordering(379) 01:21:57.276 fused_ordering(380) 01:21:57.276 fused_ordering(381) 01:21:57.276 fused_ordering(382) 01:21:57.276 fused_ordering(383) 01:21:57.276 fused_ordering(384) 01:21:57.276 fused_ordering(385) 01:21:57.276 fused_ordering(386) 01:21:57.276 fused_ordering(387) 01:21:57.276 fused_ordering(388) 01:21:57.276 fused_ordering(389) 01:21:57.276 fused_ordering(390) 01:21:57.276 fused_ordering(391) 01:21:57.276 fused_ordering(392) 01:21:57.276 fused_ordering(393) 01:21:57.276 fused_ordering(394) 01:21:57.276 fused_ordering(395) 01:21:57.276 fused_ordering(396) 01:21:57.276 fused_ordering(397) 01:21:57.276 fused_ordering(398) 01:21:57.276 fused_ordering(399) 01:21:57.276 fused_ordering(400) 01:21:57.276 fused_ordering(401) 01:21:57.276 fused_ordering(402) 01:21:57.276 fused_ordering(403) 01:21:57.276 fused_ordering(404) 01:21:57.276 fused_ordering(405) 01:21:57.276 fused_ordering(406) 01:21:57.276 fused_ordering(407) 01:21:57.276 fused_ordering(408) 01:21:57.276 fused_ordering(409) 01:21:57.276 fused_ordering(410) 01:21:57.536 fused_ordering(411) 01:21:57.536 fused_ordering(412) 01:21:57.536 fused_ordering(413) 01:21:57.536 fused_ordering(414) 01:21:57.536 fused_ordering(415) 01:21:57.536 fused_ordering(416) 01:21:57.536 fused_ordering(417) 01:21:57.536 fused_ordering(418) 01:21:57.536 fused_ordering(419) 01:21:57.536 fused_ordering(420) 
01:21:57.536 fused_ordering(421) 01:21:57.536 fused_ordering(422) 01:21:57.536 fused_ordering(423) 01:21:57.536 fused_ordering(424) 01:21:57.536 fused_ordering(425) 01:21:57.536 fused_ordering(426) 01:21:57.536 fused_ordering(427) 01:21:57.536 fused_ordering(428) 01:21:57.536 fused_ordering(429) 01:21:57.536 fused_ordering(430) 01:21:57.536 fused_ordering(431) 01:21:57.536 fused_ordering(432) 01:21:57.536 fused_ordering(433) 01:21:57.536 fused_ordering(434) 01:21:57.536 fused_ordering(435) 01:21:57.536 fused_ordering(436) 01:21:57.536 fused_ordering(437) 01:21:57.536 fused_ordering(438) 01:21:57.536 fused_ordering(439) 01:21:57.536 fused_ordering(440) 01:21:57.536 fused_ordering(441) 01:21:57.536 fused_ordering(442) 01:21:57.536 fused_ordering(443) 01:21:57.536 fused_ordering(444) 01:21:57.536 fused_ordering(445) 01:21:57.536 fused_ordering(446) 01:21:57.536 fused_ordering(447) 01:21:57.536 fused_ordering(448) 01:21:57.536 fused_ordering(449) 01:21:57.536 fused_ordering(450) 01:21:57.536 fused_ordering(451) 01:21:57.536 fused_ordering(452) 01:21:57.536 fused_ordering(453) 01:21:57.536 fused_ordering(454) 01:21:57.536 fused_ordering(455) 01:21:57.536 fused_ordering(456) 01:21:57.536 fused_ordering(457) 01:21:57.536 fused_ordering(458) 01:21:57.536 fused_ordering(459) 01:21:57.536 fused_ordering(460) 01:21:57.536 fused_ordering(461) 01:21:57.536 fused_ordering(462) 01:21:57.536 fused_ordering(463) 01:21:57.536 fused_ordering(464) 01:21:57.536 fused_ordering(465) 01:21:57.536 fused_ordering(466) 01:21:57.536 fused_ordering(467) 01:21:57.536 fused_ordering(468) 01:21:57.536 fused_ordering(469) 01:21:57.536 fused_ordering(470) 01:21:57.536 fused_ordering(471) 01:21:57.536 fused_ordering(472) 01:21:57.536 fused_ordering(473) 01:21:57.536 fused_ordering(474) 01:21:57.536 fused_ordering(475) 01:21:57.536 fused_ordering(476) 01:21:57.536 fused_ordering(477) 01:21:57.536 fused_ordering(478) 01:21:57.536 fused_ordering(479) 01:21:57.536 fused_ordering(480) 01:21:57.536 fused_ordering(481) 01:21:57.536 fused_ordering(482) 01:21:57.536 fused_ordering(483) 01:21:57.536 fused_ordering(484) 01:21:57.536 fused_ordering(485) 01:21:57.536 fused_ordering(486) 01:21:57.536 fused_ordering(487) 01:21:57.536 fused_ordering(488) 01:21:57.536 fused_ordering(489) 01:21:57.536 fused_ordering(490) 01:21:57.536 fused_ordering(491) 01:21:57.536 fused_ordering(492) 01:21:57.536 fused_ordering(493) 01:21:57.536 fused_ordering(494) 01:21:57.536 fused_ordering(495) 01:21:57.536 fused_ordering(496) 01:21:57.536 fused_ordering(497) 01:21:57.536 fused_ordering(498) 01:21:57.536 fused_ordering(499) 01:21:57.536 fused_ordering(500) 01:21:57.536 fused_ordering(501) 01:21:57.536 fused_ordering(502) 01:21:57.536 fused_ordering(503) 01:21:57.536 fused_ordering(504) 01:21:57.536 fused_ordering(505) 01:21:57.536 fused_ordering(506) 01:21:57.536 fused_ordering(507) 01:21:57.536 fused_ordering(508) 01:21:57.536 fused_ordering(509) 01:21:57.536 fused_ordering(510) 01:21:57.536 fused_ordering(511) 01:21:57.536 fused_ordering(512) 01:21:57.536 fused_ordering(513) 01:21:57.536 fused_ordering(514) 01:21:57.536 fused_ordering(515) 01:21:57.536 fused_ordering(516) 01:21:57.536 fused_ordering(517) 01:21:57.536 fused_ordering(518) 01:21:57.536 fused_ordering(519) 01:21:57.536 fused_ordering(520) 01:21:57.536 fused_ordering(521) 01:21:57.536 fused_ordering(522) 01:21:57.536 fused_ordering(523) 01:21:57.536 fused_ordering(524) 01:21:57.536 fused_ordering(525) 01:21:57.536 fused_ordering(526) 01:21:57.536 fused_ordering(527) 01:21:57.536 
fused_ordering(528) 01:21:57.536 fused_ordering(529) 01:21:57.536 fused_ordering(530) 01:21:57.536 fused_ordering(531) 01:21:57.536 fused_ordering(532) 01:21:57.536 fused_ordering(533) 01:21:57.536 fused_ordering(534) 01:21:57.536 fused_ordering(535) 01:21:57.536 fused_ordering(536) 01:21:57.536 fused_ordering(537) 01:21:57.536 fused_ordering(538) 01:21:57.536 fused_ordering(539) 01:21:57.536 fused_ordering(540) 01:21:57.536 fused_ordering(541) 01:21:57.536 fused_ordering(542) 01:21:57.536 fused_ordering(543) 01:21:57.536 fused_ordering(544) 01:21:57.536 fused_ordering(545) 01:21:57.536 fused_ordering(546) 01:21:57.536 fused_ordering(547) 01:21:57.536 fused_ordering(548) 01:21:57.536 fused_ordering(549) 01:21:57.536 fused_ordering(550) 01:21:57.536 fused_ordering(551) 01:21:57.536 fused_ordering(552) 01:21:57.536 fused_ordering(553) 01:21:57.536 fused_ordering(554) 01:21:57.536 fused_ordering(555) 01:21:57.536 fused_ordering(556) 01:21:57.536 fused_ordering(557) 01:21:57.536 fused_ordering(558) 01:21:57.536 fused_ordering(559) 01:21:57.536 fused_ordering(560) 01:21:57.536 fused_ordering(561) 01:21:57.536 fused_ordering(562) 01:21:57.536 fused_ordering(563) 01:21:57.536 fused_ordering(564) 01:21:57.536 fused_ordering(565) 01:21:57.536 fused_ordering(566) 01:21:57.536 fused_ordering(567) 01:21:57.536 fused_ordering(568) 01:21:57.536 fused_ordering(569) 01:21:57.536 fused_ordering(570) 01:21:57.536 fused_ordering(571) 01:21:57.536 fused_ordering(572) 01:21:57.536 fused_ordering(573) 01:21:57.536 fused_ordering(574) 01:21:57.536 fused_ordering(575) 01:21:57.536 fused_ordering(576) 01:21:57.536 fused_ordering(577) 01:21:57.536 fused_ordering(578) 01:21:57.536 fused_ordering(579) 01:21:57.536 fused_ordering(580) 01:21:57.536 fused_ordering(581) 01:21:57.536 fused_ordering(582) 01:21:57.537 fused_ordering(583) 01:21:57.537 fused_ordering(584) 01:21:57.537 fused_ordering(585) 01:21:57.537 fused_ordering(586) 01:21:57.537 fused_ordering(587) 01:21:57.537 fused_ordering(588) 01:21:57.537 fused_ordering(589) 01:21:57.537 fused_ordering(590) 01:21:57.537 fused_ordering(591) 01:21:57.537 fused_ordering(592) 01:21:57.537 fused_ordering(593) 01:21:57.537 fused_ordering(594) 01:21:57.537 fused_ordering(595) 01:21:57.537 fused_ordering(596) 01:21:57.537 fused_ordering(597) 01:21:57.537 fused_ordering(598) 01:21:57.537 fused_ordering(599) 01:21:57.537 fused_ordering(600) 01:21:57.537 fused_ordering(601) 01:21:57.537 fused_ordering(602) 01:21:57.537 fused_ordering(603) 01:21:57.537 fused_ordering(604) 01:21:57.537 fused_ordering(605) 01:21:57.537 fused_ordering(606) 01:21:57.537 fused_ordering(607) 01:21:57.537 fused_ordering(608) 01:21:57.537 fused_ordering(609) 01:21:57.537 fused_ordering(610) 01:21:57.537 fused_ordering(611) 01:21:57.537 fused_ordering(612) 01:21:57.537 fused_ordering(613) 01:21:57.537 fused_ordering(614) 01:21:57.537 fused_ordering(615) 01:21:58.107 fused_ordering(616) 01:21:58.107 fused_ordering(617) 01:21:58.107 fused_ordering(618) 01:21:58.107 fused_ordering(619) 01:21:58.107 fused_ordering(620) 01:21:58.107 fused_ordering(621) 01:21:58.107 fused_ordering(622) 01:21:58.107 fused_ordering(623) 01:21:58.107 fused_ordering(624) 01:21:58.107 fused_ordering(625) 01:21:58.107 fused_ordering(626) 01:21:58.107 fused_ordering(627) 01:21:58.107 fused_ordering(628) 01:21:58.107 fused_ordering(629) 01:21:58.107 fused_ordering(630) 01:21:58.107 fused_ordering(631) 01:21:58.107 fused_ordering(632) 01:21:58.107 fused_ordering(633) 01:21:58.107 fused_ordering(634) 01:21:58.107 fused_ordering(635) 
01:21:58.107 fused_ordering(636) 01:21:58.107 fused_ordering(637) 01:21:58.107 fused_ordering(638) 01:21:58.107 fused_ordering(639) 01:21:58.107 fused_ordering(640) 01:21:58.107 fused_ordering(641) 01:21:58.107 fused_ordering(642) 01:21:58.107 fused_ordering(643) 01:21:58.107 fused_ordering(644) 01:21:58.107 fused_ordering(645) 01:21:58.107 fused_ordering(646) 01:21:58.107 fused_ordering(647) 01:21:58.107 fused_ordering(648) 01:21:58.107 fused_ordering(649) 01:21:58.107 fused_ordering(650) 01:21:58.107 fused_ordering(651) 01:21:58.107 fused_ordering(652) 01:21:58.107 fused_ordering(653) 01:21:58.107 fused_ordering(654) 01:21:58.107 fused_ordering(655) 01:21:58.107 fused_ordering(656) 01:21:58.107 fused_ordering(657) 01:21:58.107 fused_ordering(658) 01:21:58.107 fused_ordering(659) 01:21:58.107 fused_ordering(660) 01:21:58.107 fused_ordering(661) 01:21:58.107 fused_ordering(662) 01:21:58.107 fused_ordering(663) 01:21:58.107 fused_ordering(664) 01:21:58.107 fused_ordering(665) 01:21:58.107 fused_ordering(666) 01:21:58.107 fused_ordering(667) 01:21:58.107 fused_ordering(668) 01:21:58.107 fused_ordering(669) 01:21:58.107 fused_ordering(670) 01:21:58.107 fused_ordering(671) 01:21:58.107 fused_ordering(672) 01:21:58.107 fused_ordering(673) 01:21:58.107 fused_ordering(674) 01:21:58.107 fused_ordering(675) 01:21:58.107 fused_ordering(676) 01:21:58.107 fused_ordering(677) 01:21:58.107 fused_ordering(678) 01:21:58.107 fused_ordering(679) 01:21:58.107 fused_ordering(680) 01:21:58.107 fused_ordering(681) 01:21:58.107 fused_ordering(682) 01:21:58.107 fused_ordering(683) 01:21:58.107 fused_ordering(684) 01:21:58.108 fused_ordering(685) 01:21:58.108 fused_ordering(686) 01:21:58.108 fused_ordering(687) 01:21:58.108 fused_ordering(688) 01:21:58.108 fused_ordering(689) 01:21:58.108 fused_ordering(690) 01:21:58.108 fused_ordering(691) 01:21:58.108 fused_ordering(692) 01:21:58.108 fused_ordering(693) 01:21:58.108 fused_ordering(694) 01:21:58.108 fused_ordering(695) 01:21:58.108 fused_ordering(696) 01:21:58.108 fused_ordering(697) 01:21:58.108 fused_ordering(698) 01:21:58.108 fused_ordering(699) 01:21:58.108 fused_ordering(700) 01:21:58.108 fused_ordering(701) 01:21:58.108 fused_ordering(702) 01:21:58.108 fused_ordering(703) 01:21:58.108 fused_ordering(704) 01:21:58.108 fused_ordering(705) 01:21:58.108 fused_ordering(706) 01:21:58.108 fused_ordering(707) 01:21:58.108 fused_ordering(708) 01:21:58.108 fused_ordering(709) 01:21:58.108 fused_ordering(710) 01:21:58.108 fused_ordering(711) 01:21:58.108 fused_ordering(712) 01:21:58.108 fused_ordering(713) 01:21:58.108 fused_ordering(714) 01:21:58.108 fused_ordering(715) 01:21:58.108 fused_ordering(716) 01:21:58.108 fused_ordering(717) 01:21:58.108 fused_ordering(718) 01:21:58.108 fused_ordering(719) 01:21:58.108 fused_ordering(720) 01:21:58.108 fused_ordering(721) 01:21:58.108 fused_ordering(722) 01:21:58.108 fused_ordering(723) 01:21:58.108 fused_ordering(724) 01:21:58.108 fused_ordering(725) 01:21:58.108 fused_ordering(726) 01:21:58.108 fused_ordering(727) 01:21:58.108 fused_ordering(728) 01:21:58.108 fused_ordering(729) 01:21:58.108 fused_ordering(730) 01:21:58.108 fused_ordering(731) 01:21:58.108 fused_ordering(732) 01:21:58.108 fused_ordering(733) 01:21:58.108 fused_ordering(734) 01:21:58.108 fused_ordering(735) 01:21:58.108 fused_ordering(736) 01:21:58.108 fused_ordering(737) 01:21:58.108 fused_ordering(738) 01:21:58.108 fused_ordering(739) 01:21:58.108 fused_ordering(740) 01:21:58.108 fused_ordering(741) 01:21:58.108 fused_ordering(742) 01:21:58.108 
fused_ordering(743) 01:21:58.108 fused_ordering(744) 01:21:58.108 fused_ordering(745) 01:21:58.108 fused_ordering(746) 01:21:58.108 fused_ordering(747) 01:21:58.108 fused_ordering(748) 01:21:58.108 fused_ordering(749) 01:21:58.108 fused_ordering(750) 01:21:58.108 fused_ordering(751) 01:21:58.108 fused_ordering(752) 01:21:58.108 fused_ordering(753) 01:21:58.108 fused_ordering(754) 01:21:58.108 fused_ordering(755) 01:21:58.108 fused_ordering(756) 01:21:58.108 fused_ordering(757) 01:21:58.108 fused_ordering(758) 01:21:58.108 fused_ordering(759) 01:21:58.108 fused_ordering(760) 01:21:58.108 fused_ordering(761) 01:21:58.108 fused_ordering(762) 01:21:58.108 fused_ordering(763) 01:21:58.108 fused_ordering(764) 01:21:58.108 fused_ordering(765) 01:21:58.108 fused_ordering(766) 01:21:58.108 fused_ordering(767) 01:21:58.108 fused_ordering(768) 01:21:58.108 fused_ordering(769) 01:21:58.108 fused_ordering(770) 01:21:58.108 fused_ordering(771) 01:21:58.108 fused_ordering(772) 01:21:58.108 fused_ordering(773) 01:21:58.108 fused_ordering(774) 01:21:58.108 fused_ordering(775) 01:21:58.108 fused_ordering(776) 01:21:58.108 fused_ordering(777) 01:21:58.108 fused_ordering(778) 01:21:58.108 fused_ordering(779) 01:21:58.108 fused_ordering(780) 01:21:58.108 fused_ordering(781) 01:21:58.108 fused_ordering(782) 01:21:58.108 fused_ordering(783) 01:21:58.108 fused_ordering(784) 01:21:58.108 fused_ordering(785) 01:21:58.108 fused_ordering(786) 01:21:58.108 fused_ordering(787) 01:21:58.108 fused_ordering(788) 01:21:58.108 fused_ordering(789) 01:21:58.108 fused_ordering(790) 01:21:58.108 fused_ordering(791) 01:21:58.108 fused_ordering(792) 01:21:58.108 fused_ordering(793) 01:21:58.108 fused_ordering(794) 01:21:58.108 fused_ordering(795) 01:21:58.108 fused_ordering(796) 01:21:58.108 fused_ordering(797) 01:21:58.108 fused_ordering(798) 01:21:58.108 fused_ordering(799) 01:21:58.108 fused_ordering(800) 01:21:58.108 fused_ordering(801) 01:21:58.108 fused_ordering(802) 01:21:58.108 fused_ordering(803) 01:21:58.108 fused_ordering(804) 01:21:58.108 fused_ordering(805) 01:21:58.108 fused_ordering(806) 01:21:58.108 fused_ordering(807) 01:21:58.108 fused_ordering(808) 01:21:58.108 fused_ordering(809) 01:21:58.108 fused_ordering(810) 01:21:58.108 fused_ordering(811) 01:21:58.108 fused_ordering(812) 01:21:58.108 fused_ordering(813) 01:21:58.108 fused_ordering(814) 01:21:58.108 fused_ordering(815) 01:21:58.108 fused_ordering(816) 01:21:58.108 fused_ordering(817) 01:21:58.108 fused_ordering(818) 01:21:58.108 fused_ordering(819) 01:21:58.108 fused_ordering(820) 01:21:58.368 fused_ordering(821) 01:21:58.368 fused_ordering(822) 01:21:58.368 fused_ordering(823) 01:21:58.368 fused_ordering(824) 01:21:58.368 fused_ordering(825) 01:21:58.368 fused_ordering(826) 01:21:58.368 fused_ordering(827) 01:21:58.368 fused_ordering(828) 01:21:58.368 fused_ordering(829) 01:21:58.368 fused_ordering(830) 01:21:58.368 fused_ordering(831) 01:21:58.368 fused_ordering(832) 01:21:58.368 fused_ordering(833) 01:21:58.368 fused_ordering(834) 01:21:58.368 fused_ordering(835) 01:21:58.368 fused_ordering(836) 01:21:58.368 fused_ordering(837) 01:21:58.368 fused_ordering(838) 01:21:58.368 fused_ordering(839) 01:21:58.368 fused_ordering(840) 01:21:58.368 fused_ordering(841) 01:21:58.368 fused_ordering(842) 01:21:58.368 fused_ordering(843) 01:21:58.368 fused_ordering(844) 01:21:58.368 fused_ordering(845) 01:21:58.368 fused_ordering(846) 01:21:58.368 fused_ordering(847) 01:21:58.368 fused_ordering(848) 01:21:58.369 fused_ordering(849) 01:21:58.369 fused_ordering(850) 
01:21:58.369 fused_ordering(851) 01:21:58.369 fused_ordering(852) 01:21:58.369 fused_ordering(853) 01:21:58.369 fused_ordering(854) 01:21:58.369 fused_ordering(855) 01:21:58.369 fused_ordering(856) 01:21:58.369 fused_ordering(857) 01:21:58.369 fused_ordering(858) 01:21:58.369 fused_ordering(859) 01:21:58.369 fused_ordering(860) 01:21:58.369 fused_ordering(861) 01:21:58.369 fused_ordering(862) 01:21:58.369 fused_ordering(863) 01:21:58.369 fused_ordering(864) 01:21:58.369 fused_ordering(865) 01:21:58.369 fused_ordering(866) 01:21:58.369 fused_ordering(867) 01:21:58.369 fused_ordering(868) 01:21:58.369 fused_ordering(869) 01:21:58.369 fused_ordering(870) 01:21:58.369 fused_ordering(871) 01:21:58.369 fused_ordering(872) 01:21:58.369 fused_ordering(873) 01:21:58.369 fused_ordering(874) 01:21:58.369 fused_ordering(875) 01:21:58.369 fused_ordering(876) 01:21:58.369 fused_ordering(877) 01:21:58.369 fused_ordering(878) 01:21:58.369 fused_ordering(879) 01:21:58.369 fused_ordering(880) 01:21:58.369 fused_ordering(881) 01:21:58.369 fused_ordering(882) 01:21:58.369 fused_ordering(883) 01:21:58.369 fused_ordering(884) 01:21:58.369 fused_ordering(885) 01:21:58.369 fused_ordering(886) 01:21:58.369 fused_ordering(887) 01:21:58.369 fused_ordering(888) 01:21:58.369 fused_ordering(889) 01:21:58.369 fused_ordering(890) 01:21:58.369 fused_ordering(891) 01:21:58.369 fused_ordering(892) 01:21:58.369 fused_ordering(893) 01:21:58.369 fused_ordering(894) 01:21:58.369 fused_ordering(895) 01:21:58.369 fused_ordering(896) 01:21:58.369 fused_ordering(897) 01:21:58.369 fused_ordering(898) 01:21:58.369 fused_ordering(899) 01:21:58.369 fused_ordering(900) 01:21:58.369 fused_ordering(901) 01:21:58.369 fused_ordering(902) 01:21:58.369 fused_ordering(903) 01:21:58.369 fused_ordering(904) 01:21:58.369 fused_ordering(905) 01:21:58.369 fused_ordering(906) 01:21:58.369 fused_ordering(907) 01:21:58.369 fused_ordering(908) 01:21:58.369 fused_ordering(909) 01:21:58.369 fused_ordering(910) 01:21:58.369 fused_ordering(911) 01:21:58.369 fused_ordering(912) 01:21:58.369 fused_ordering(913) 01:21:58.369 fused_ordering(914) 01:21:58.369 fused_ordering(915) 01:21:58.369 fused_ordering(916) 01:21:58.369 fused_ordering(917) 01:21:58.369 fused_ordering(918) 01:21:58.369 fused_ordering(919) 01:21:58.369 fused_ordering(920) 01:21:58.369 fused_ordering(921) 01:21:58.369 fused_ordering(922) 01:21:58.369 fused_ordering(923) 01:21:58.369 fused_ordering(924) 01:21:58.369 fused_ordering(925) 01:21:58.369 fused_ordering(926) 01:21:58.369 fused_ordering(927) 01:21:58.369 fused_ordering(928) 01:21:58.369 fused_ordering(929) 01:21:58.369 fused_ordering(930) 01:21:58.369 fused_ordering(931) 01:21:58.369 fused_ordering(932) 01:21:58.369 fused_ordering(933) 01:21:58.369 fused_ordering(934) 01:21:58.369 fused_ordering(935) 01:21:58.369 fused_ordering(936) 01:21:58.369 fused_ordering(937) 01:21:58.369 fused_ordering(938) 01:21:58.369 fused_ordering(939) 01:21:58.369 fused_ordering(940) 01:21:58.369 fused_ordering(941) 01:21:58.369 fused_ordering(942) 01:21:58.369 fused_ordering(943) 01:21:58.369 fused_ordering(944) 01:21:58.369 fused_ordering(945) 01:21:58.369 fused_ordering(946) 01:21:58.369 fused_ordering(947) 01:21:58.369 fused_ordering(948) 01:21:58.369 fused_ordering(949) 01:21:58.369 fused_ordering(950) 01:21:58.369 fused_ordering(951) 01:21:58.369 fused_ordering(952) 01:21:58.369 fused_ordering(953) 01:21:58.369 fused_ordering(954) 01:21:58.369 fused_ordering(955) 01:21:58.369 fused_ordering(956) 01:21:58.369 fused_ordering(957) 01:21:58.369 
fused_ordering(958) 01:21:58.369 fused_ordering(959) 01:21:58.369 fused_ordering(960) 01:21:58.369 fused_ordering(961) 01:21:58.369 fused_ordering(962) 01:21:58.369 fused_ordering(963) 01:21:58.369 fused_ordering(964) 01:21:58.369 fused_ordering(965) 01:21:58.369 fused_ordering(966) 01:21:58.369 fused_ordering(967) 01:21:58.369 fused_ordering(968) 01:21:58.369 fused_ordering(969) 01:21:58.369 fused_ordering(970) 01:21:58.369 fused_ordering(971) 01:21:58.369 fused_ordering(972) 01:21:58.369 fused_ordering(973) 01:21:58.369 fused_ordering(974) 01:21:58.369 fused_ordering(975) 01:21:58.369 fused_ordering(976) 01:21:58.369 fused_ordering(977) 01:21:58.369 fused_ordering(978) 01:21:58.369 fused_ordering(979) 01:21:58.369 fused_ordering(980) 01:21:58.369 fused_ordering(981) 01:21:58.369 fused_ordering(982) 01:21:58.369 fused_ordering(983) 01:21:58.369 fused_ordering(984) 01:21:58.369 fused_ordering(985) 01:21:58.369 fused_ordering(986) 01:21:58.369 fused_ordering(987) 01:21:58.369 fused_ordering(988) 01:21:58.369 fused_ordering(989) 01:21:58.369 fused_ordering(990) 01:21:58.369 fused_ordering(991) 01:21:58.369 fused_ordering(992) 01:21:58.369 fused_ordering(993) 01:21:58.369 fused_ordering(994) 01:21:58.369 fused_ordering(995) 01:21:58.369 fused_ordering(996) 01:21:58.369 fused_ordering(997) 01:21:58.369 fused_ordering(998) 01:21:58.369 fused_ordering(999) 01:21:58.369 fused_ordering(1000) 01:21:58.369 fused_ordering(1001) 01:21:58.369 fused_ordering(1002) 01:21:58.369 fused_ordering(1003) 01:21:58.369 fused_ordering(1004) 01:21:58.369 fused_ordering(1005) 01:21:58.369 fused_ordering(1006) 01:21:58.369 fused_ordering(1007) 01:21:58.369 fused_ordering(1008) 01:21:58.369 fused_ordering(1009) 01:21:58.369 fused_ordering(1010) 01:21:58.369 fused_ordering(1011) 01:21:58.369 fused_ordering(1012) 01:21:58.369 fused_ordering(1013) 01:21:58.369 fused_ordering(1014) 01:21:58.369 fused_ordering(1015) 01:21:58.369 fused_ordering(1016) 01:21:58.369 fused_ordering(1017) 01:21:58.369 fused_ordering(1018) 01:21:58.369 fused_ordering(1019) 01:21:58.369 fused_ordering(1020) 01:21:58.369 fused_ordering(1021) 01:21:58.369 fused_ordering(1022) 01:21:58.369 fused_ordering(1023) 01:21:58.369 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 01:21:58.369 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 01:21:58.369 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 01:21:58.369 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 01:21:58.628 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:21:58.628 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 01:21:58.628 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 01:21:58.628 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:21:58.628 rmmod nvme_tcp 01:21:58.628 rmmod nvme_fabrics 01:21:58.628 rmmod nvme_keyring 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 01:21:58.628 05:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 74830 ']' 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 74830 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 74830 ']' 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 74830 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74830 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:21:58.628 killing process with pid 74830 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74830' 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 74830 01:21:58.628 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 74830 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:58.889 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 01:21:59.154 01:21:59.154 real 0m3.883s 01:21:59.154 user 0m4.506s 01:21:59.154 sys 0m1.098s 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 01:21:59.154 ************************************ 01:21:59.154 END TEST nvmf_fused_ordering 01:21:59.154 ************************************ 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:21:59.154 ************************************ 01:21:59.154 START TEST nvmf_ns_masking 01:21:59.154 ************************************ 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 01:21:59.154 * Looking for test storage... 
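The fused_ordering run above ends with nvmftestfini, which unwinds the setup before the next test (nvmf_ns_masking, starting here) builds its own: the firewall rules are dropped by filtering on the SPDK_NVMF comment tag added at setup time, then the bridge, veth pairs and namespace are deleted. A condensed sketch of that cleanup, reusing the names above and deliberately tolerating links that are already gone:

#!/usr/bin/env bash
# Condensed sketch of the nvmftestfini/nvmf_veth_fini cleanup traced above.
# No `set -e` on purpose: already-missing links are not an error here.
set -u

NS=nvmf_tgt_ns_spdk

# Drop only the rules the test added: every one carries an SPDK_NVMF
# comment, so filter them out of iptables-save output and restore the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge ports, take them down, then delete the bridge, the
# initiator veth ends, and finally the namespace with the target ends.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null
    ip link set "$dev" down     2>/dev/null
done
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if  2>/dev/null
ip link delete nvmf_init_if2 2>/dev/null
ip netns exec "$NS" ip link delete nvmf_tgt_if  2>/dev/null
ip netns exec "$NS" ip link delete nvmf_tgt_if2 2>/dev/null
ip netns delete "$NS" 2>/dev/null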
01:21:59.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:21:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:59.154 --rc genhtml_branch_coverage=1 01:21:59.154 --rc genhtml_function_coverage=1 01:21:59.154 --rc genhtml_legend=1 01:21:59.154 --rc geninfo_all_blocks=1 01:21:59.154 --rc geninfo_unexecuted_blocks=1 01:21:59.154 01:21:59.154 ' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:21:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:59.154 --rc genhtml_branch_coverage=1 01:21:59.154 --rc genhtml_function_coverage=1 01:21:59.154 --rc genhtml_legend=1 01:21:59.154 --rc geninfo_all_blocks=1 01:21:59.154 --rc geninfo_unexecuted_blocks=1 01:21:59.154 01:21:59.154 ' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:21:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:59.154 --rc genhtml_branch_coverage=1 01:21:59.154 --rc genhtml_function_coverage=1 01:21:59.154 --rc genhtml_legend=1 01:21:59.154 --rc geninfo_all_blocks=1 01:21:59.154 --rc geninfo_unexecuted_blocks=1 01:21:59.154 01:21:59.154 ' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:21:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:21:59.154 --rc genhtml_branch_coverage=1 01:21:59.154 --rc genhtml_function_coverage=1 01:21:59.154 --rc genhtml_legend=1 01:21:59.154 --rc geninfo_all_blocks=1 01:21:59.154 --rc geninfo_unexecuted_blocks=1 01:21:59.154 01:21:59.154 ' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:21:59.154 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:21:59.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f3da334d-11fb-4578-9c9d-13630c4dcc72 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4c0de547-b6c4-4716-9e54-33ea1df960df 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=04c27554-2aef-43fe-a807-d9c6d4fd7152 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:21:59.155 05:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:21:59.155 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:21:59.421 Cannot find device "nvmf_init_br" 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:21:59.421 Cannot find device "nvmf_init_br2" 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:21:59.421 Cannot find device "nvmf_tgt_br" 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:21:59.421 Cannot find device "nvmf_tgt_br2" 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:21:59.421 Cannot find device "nvmf_init_br" 01:21:59.421 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:21:59.422 Cannot find device "nvmf_init_br2" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:21:59.422 Cannot find device "nvmf_tgt_br" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:21:59.422 Cannot find device 
"nvmf_tgt_br2" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:21:59.422 Cannot find device "nvmf_br" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:21:59.422 Cannot find device "nvmf_init_if" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:21:59.422 Cannot find device "nvmf_init_if2" 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:21:59.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:21:59.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:21:59.422 
05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:21:59.422 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:21:59.422 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:21:59.422 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:21:59.422 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:21:59.422 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:21:59.422 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:21:59.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:21:59.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 01:21:59.756 01:21:59.756 --- 10.0.0.3 ping statistics --- 01:21:59.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:59.756 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:21:59.756 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:21:59.756 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 01:21:59.756 01:21:59.756 --- 10.0.0.4 ping statistics --- 01:21:59.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:59.756 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:21:59.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:21:59.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:21:59.756 01:21:59.756 --- 10.0.0.1 ping statistics --- 01:21:59.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:59.756 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:21:59.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:21:59.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 01:21:59.756 01:21:59.756 --- 10.0.0.2 ping statistics --- 01:21:59.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:21:59.756 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75120 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75120 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75120 ']' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:21:59.756 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 01:21:59.756 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:21:59.756 [2024-12-09 05:16:51.163495] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:21:59.756 [2024-12-09 05:16:51.163587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:21:59.756 [2024-12-09 05:16:51.303201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:21:59.756 [2024-12-09 05:16:51.348087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:21:59.756 [2024-12-09 05:16:51.348133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:21:59.756 [2024-12-09 05:16:51.348140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:21:59.756 [2024-12-09 05:16:51.348145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:21:59.756 [2024-12-09 05:16:51.348150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:21:59.756 [2024-12-09 05:16:51.348435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:00.696 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:22:00.696 [2024-12-09 05:16:52.305611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:22:00.957 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 01:22:00.957 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 01:22:00.957 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:22:00.957 Malloc1 01:22:00.957 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 01:22:01.218 Malloc2 01:22:01.218 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:22:01.481 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 01:22:01.743 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:22:02.005 [2024-12-09 05:16:53.437739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04c27554-2aef-43fe-a807-d9c6d4fd7152 -a 10.0.0.3 -s 4420 -i 4 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:22:02.005 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:04.538 [ 0]:0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
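The wrapped xtrace above makes the provisioning sequence hard to follow. Collected into one place, the target setup and the first visibility probe that just ran correspond roughly to the sketch below; the NQNs, serial, listen address, and rpc.py path are copied from the trace, while the shell variables and the assumption that the connected controller appears as /dev/nvme0 are illustrative only.

#!/usr/bin/env bash
# Condensed sketch of the steps traced above, not a verbatim copy of ns_masking.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2016-06.io.spdk:host1

# Target side: subsystem with one auto-visible namespace, listening on TCP 10.0.0.3:4420.
$rpc nvmf_create_subsystem "$subnqn" -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 1
$rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.3 -s 4420

# Host side: connect as host1, then confirm NSID 1 is listed and reports a non-zero NGUID.
nvme connect -t tcp -n "$subnqn" -q "$hostnqn" -a 10.0.0.3 -s 4420 -i 4
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid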
01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ed1645a591647e69052ee292bb21087 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ed1645a591647e69052ee292bb21087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:04.538 [ 0]:0x1 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ed1645a591647e69052ee292bb21087 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ed1645a591647e69052ee292bb21087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:04.538 [ 1]:0x2 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:22:04.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:22:04.538 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:22:04.796 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 01:22:04.796 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 01:22:04.796 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04c27554-2aef-43fe-a807-d9c6d4fd7152 -a 10.0.0.3 -s 4420 -i 4 01:22:05.055 05:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 01:22:05.055 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 01:22:05.055 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:22:05.055 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 01:22:05.055 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 01:22:05.055 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:06.952 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:06.953 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:07.210 [ 0]:0x2 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.210 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:07.467 [ 0]:0x1 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ed1645a591647e69052ee292bb21087 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ed1645a591647e69052ee292bb21087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:07.467 [ 1]:0x2 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c2016aed09d0456094649ab538a96dff 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.467 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:07.725 [ 0]:0x2 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:22:07.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:22:07.725 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 04c27554-2aef-43fe-a807-d9c6d4fd7152 -a 10.0.0.3 -s 4420 -i 4 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 01:22:07.982 05:16:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.506 [ 0]:0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0ed1645a591647e69052ee292bb21087 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0ed1645a591647e69052ee292bb21087 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.506 [ 1]:0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
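The block above is the core of the masking test: namespace 1 was re-created with --no-auto-visible, so the connected host sees an all-zero NGUID (effectively no NSID 1) until its NQN is explicitly allowed, and it disappears again once the NQN is removed, while the auto-visible namespace 2 stays listed throughout. A condensed sketch of that toggle, reusing the RPC calls from the trace (subsystem and host NQNs copied from the log, error handling omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2016-06.io.spdk:host1

# Namespace 1 is added masked: invisible to every host by default.
$rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 1 --no-auto-visible

$rpc nvmf_ns_add_host    "$subnqn" 1 "$hostnqn"   # NSID 1 becomes visible to host1
$rpc nvmf_ns_remove_host "$subnqn" 1 "$hostnqn"   # NSID 1 is hidden from host1 again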
01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.506 [ 0]:0x2 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:22:10.506 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 01:22:10.506 [2024-12-09 05:17:02.117212] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 01:22:10.764 2024/12/09 05:17:02 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 01:22:10.764 request: 01:22:10.764 { 01:22:10.764 "method": "nvmf_ns_remove_host", 01:22:10.764 "params": { 01:22:10.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:22:10.764 "nsid": 2, 01:22:10.764 "host": "nqn.2016-06.io.spdk:host1" 01:22:10.764 } 01:22:10.765 } 01:22:10.765 Got JSON-RPC error response 01:22:10.765 GoRPCClient: error on JSON-RPC call 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 01:22:10.765 [ 0]:0x2 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2016aed09d0456094649ab538a96dff 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2016aed09d0456094649ab538a96dff != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:22:10.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75480 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75480 /var/tmp/host.sock 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75480 ']' 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:10.765 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:22:10.765 [2024-12-09 05:17:02.300258] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
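[Editorial note, not part of the captured log] The trace above repeatedly exercises the ns_is_visible helper from target/ns_masking.sh: a namespace counts as visible to the connected host when it shows up in `nvme list-ns` and its NGUID, read via `nvme id-ns ... -o json | jq -r .nguid`, is not the all-zero placeholder. A minimal stand-alone sketch of that check is below; the device path and NSID are illustrative values, not taken from this run, and this is a sketch of the idea rather than the test script verbatim.

```bash
#!/usr/bin/env bash
# Sketch of the visibility check performed by ns_is_visible in ns_masking.sh.
# Assumes nvme-cli and jq are installed; /dev/nvme0 and nsid=0x2 are examples.
ctrl=/dev/nvme0
nsid=0x2

# The namespace must appear in the controller's active namespace list...
nvme list-ns "$ctrl" | grep -q "$nsid" || { echo "nsid $nsid not listed"; exit 1; }

# ...and its NGUID must not be the all-zero value reported for masked namespaces.
nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
if [[ "$nguid" == "00000000000000000000000000000000" ]]; then
    echo "nsid $nsid is masked (all-zero NGUID)"
    exit 1
fi
echo "nsid $nsid visible, nguid=$nguid"
```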
01:22:10.765 [2024-12-09 05:17:02.300419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75480 ] 01:22:11.024 [2024-12-09 05:17:02.437309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:11.025 [2024-12-09 05:17:02.482591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:22:11.958 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:11.958 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 01:22:11.958 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:22:11.958 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:22:12.216 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f3da334d-11fb-4578-9c9d-13630c4dcc72 01:22:12.216 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 01:22:12.216 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F3DA334D11FB45789C9D13630C4DCC72 -i 01:22:12.475 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4c0de547-b6c4-4716-9e54-33ea1df960df 01:22:12.475 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 01:22:12.475 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4C0DE547B6C447169E5433EA1DF960DF -i 01:22:12.475 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 01:22:12.733 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 01:22:12.992 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 01:22:12.992 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 01:22:13.250 nvme0n1 01:22:13.250 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 01:22:13.250 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 01:22:13.509 nvme1n2 01:22:13.509 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 01:22:13.509 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 01:22:13.509 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 01:22:13.509 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 01:22:13.509 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 01:22:13.767 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 01:22:13.767 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 01:22:13.767 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 01:22:13.767 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f3da334d-11fb-4578-9c9d-13630c4dcc72 == \f\3\d\a\3\3\4\d\-\1\1\f\b\-\4\5\7\8\-\9\c\9\d\-\1\3\6\3\0\c\4\d\c\c\7\2 ]] 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4c0de547-b6c4-4716-9e54-33ea1df960df == \4\c\0\d\e\5\4\7\-\b\6\c\4\-\4\7\1\6\-\9\e\5\4\-\3\3\e\a\1\d\f\9\6\0\d\f ]] 01:22:14.024 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:22:14.282 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:22:14.539 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f3da334d-11fb-4578-9c9d-13630c4dcc72 01:22:14.539 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3DA334D11FB45789C9D13630C4DCC72 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3DA334D11FB45789C9D13630C4DCC72 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:22:14.540 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3DA334D11FB45789C9D13630C4DCC72 01:22:14.797 [2024-12-09 05:17:06.226365] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 01:22:14.797 [2024-12-09 05:17:06.226399] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 01:22:14.797 [2024-12-09 05:17:06.226405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:22:14.797 2024/12/09 05:17:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:F3DA334D11FB45789C9D13630C4DCC72 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:22:14.797 request: 01:22:14.797 { 01:22:14.797 "method": "nvmf_subsystem_add_ns", 01:22:14.797 "params": { 01:22:14.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:22:14.797 "namespace": { 01:22:14.797 "bdev_name": "invalid", 01:22:14.797 "nsid": 1, 01:22:14.797 "nguid": "F3DA334D11FB45789C9D13630C4DCC72", 01:22:14.798 "no_auto_visible": false, 01:22:14.798 "hide_metadata": false 01:22:14.798 } 01:22:14.798 } 01:22:14.798 } 01:22:14.798 Got JSON-RPC error response 01:22:14.798 GoRPCClient: error on JSON-RPC call 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f3da334d-11fb-4578-9c9d-13630c4dcc72 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 01:22:14.798 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F3DA334D11FB45789C9D13630C4DCC72 -i 
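[Editorial note, not part of the captured log] The RPC calls traced above set up per-host namespace masking: each namespace is added with an explicit NGUID derived from a UUID and with `-i` (not auto-visible), and is then exposed to a single host NQN with nvmf_ns_add_host. The sketch below condenses those steps; the rpc.py path, subsystem NQN, and parameters follow the trace, while the UUID-to-NGUID one-liner only reproduces the visible effect of the uuid2nguid helper (upper-case, dashes stripped) and is not the helper itself.

```bash
# Sketch of the masking setup traced above: derive an NGUID from a UUID,
# add the namespace as not auto-visible (-i), then expose it to one host NQN.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=f3da334d-11fb-4578-9c9d-13630c4dcc72
nguid=$(tr -d '-' <<< "${uuid^^}")   # -> F3DA334D11FB45789C9D13630C4DCC72

"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
"$rpc" nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
```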
01:22:15.056 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 01:22:16.957 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 01:22:16.957 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 01:22:16.957 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75480 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75480 ']' 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75480 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75480 01:22:17.214 killing process with pid 75480 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75480' 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75480 01:22:17.214 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75480 01:22:17.471 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:22:17.729 rmmod nvme_tcp 01:22:17.729 rmmod nvme_fabrics 01:22:17.729 rmmod nvme_keyring 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 75120 ']' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75120 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75120 ']' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75120 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75120 01:22:17.729 killing process with pid 75120 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75120' 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75120 01:22:17.729 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75120 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
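[Editorial note, not part of the captured log] The iptr step traced above removes only the firewall rules the test framework added: each rule is inserted with an `SPDK_NVMF:` comment tag (visible in the later setup trace for the next test), so cleanup can dump the full ruleset, drop the tagged lines, and restore the rest. A minimal sketch of that idiom:

```bash
# Rules are added with a comment tag so they can be identified later (as in the trace):
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Cleanup then filters the tagged rules out of a full dump and restores everything else:
iptables-save | grep -v SPDK_NVMF | iptables-restore
```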
01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:17.988 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 01:22:18.248 01:22:18.248 real 0m19.066s 01:22:18.248 user 0m31.110s 01:22:18.248 sys 0m2.332s 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 01:22:18.248 ************************************ 01:22:18.248 END TEST nvmf_ns_masking 01:22:18.248 ************************************ 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:22:18.248 ************************************ 01:22:18.248 START TEST nvmf_auth_target 01:22:18.248 ************************************ 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:22:18.248 * Looking for test storage... 
01:22:18.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:22:18.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:18.248 --rc genhtml_branch_coverage=1 01:22:18.248 --rc genhtml_function_coverage=1 01:22:18.248 --rc genhtml_legend=1 01:22:18.248 --rc geninfo_all_blocks=1 01:22:18.248 --rc geninfo_unexecuted_blocks=1 01:22:18.248 01:22:18.248 ' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:22:18.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:18.248 --rc genhtml_branch_coverage=1 01:22:18.248 --rc genhtml_function_coverage=1 01:22:18.248 --rc genhtml_legend=1 01:22:18.248 --rc geninfo_all_blocks=1 01:22:18.248 --rc geninfo_unexecuted_blocks=1 01:22:18.248 01:22:18.248 ' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:22:18.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:18.248 --rc genhtml_branch_coverage=1 01:22:18.248 --rc genhtml_function_coverage=1 01:22:18.248 --rc genhtml_legend=1 01:22:18.248 --rc geninfo_all_blocks=1 01:22:18.248 --rc geninfo_unexecuted_blocks=1 01:22:18.248 01:22:18.248 ' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:22:18.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:18.248 --rc genhtml_branch_coverage=1 01:22:18.248 --rc genhtml_function_coverage=1 01:22:18.248 --rc genhtml_legend=1 01:22:18.248 --rc geninfo_all_blocks=1 01:22:18.248 --rc geninfo_unexecuted_blocks=1 01:22:18.248 01:22:18.248 ' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:22:18.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:22:18.248 
05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:22:18.248 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:22:18.249 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:22:18.507 Cannot find device "nvmf_init_br" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:22:18.507 Cannot find device "nvmf_init_br2" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:22:18.507 Cannot find device "nvmf_tgt_br" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:22:18.507 Cannot find device "nvmf_tgt_br2" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:22:18.507 Cannot find device "nvmf_init_br" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:22:18.507 Cannot find device "nvmf_init_br2" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:22:18.507 Cannot find device "nvmf_tgt_br" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:22:18.507 Cannot find device "nvmf_tgt_br2" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:22:18.507 Cannot find device "nvmf_br" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:22:18.507 Cannot find device "nvmf_init_if" 01:22:18.507 05:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:22:18.507 Cannot find device "nvmf_init_if2" 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:18.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:18.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:22:18.507 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:22:18.507 05:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:22:18.507 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:22:18.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:22:18.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 01:22:18.764 01:22:18.764 --- 10.0.0.3 ping statistics --- 01:22:18.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:18.764 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:22:18.764 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:22:18.764 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 01:22:18.764 01:22:18.764 --- 10.0.0.4 ping statistics --- 01:22:18.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:18.764 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:22:18.764 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:22:18.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:22:18.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:22:18.764 01:22:18.765 --- 10.0.0.1 ping statistics --- 01:22:18.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:18.765 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:22:18.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:22:18.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 01:22:18.765 01:22:18.765 --- 10.0.0.2 ping statistics --- 01:22:18.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:18.765 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=75963 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 75963 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75963 ']' 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
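[Editorial note, not part of the captured log] The nvmf_veth_init sequence traced above builds the virtual topology used by the TCP transport tests: a network namespace for the target, veth pairs whose target-side ends are moved into that namespace, a bridge joining the host-side peers, and the 10.0.0.0/24 addresses that the pings then verify. The sketch below condenses those steps for a single initiator/target pair; interface and address names follow the trace, and the second pair plus the iptables ACCEPT rules are omitted for brevity.

```bash
# Condensed sketch of nvmf_veth_init for one initiator/target pair.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the default netns, the target end moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses exercised by the tests: 10.0.0.1 (initiator) and 10.0.0.3 (target).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring the links up and bridge the peer ends so the two sides can reach each other.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Sanity check, as in the trace:
ping -c 1 10.0.0.3
```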
01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:18.765 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76007 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c210db41eb94a35c3a97c555c83134ce1f5f41dd1b2d4653 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aSf 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c210db41eb94a35c3a97c555c83134ce1f5f41dd1b2d4653 0 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c210db41eb94a35c3a97c555c83134ce1f5f41dd1b2d4653 0 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c210db41eb94a35c3a97c555c83134ce1f5f41dd1b2d4653 01:22:19.698 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aSf 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aSf 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.aSf 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3f7821573291343faab9316a8016770103b3b049eefdfb1b05c7a46092dff450 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wtm 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3f7821573291343faab9316a8016770103b3b049eefdfb1b05c7a46092dff450 3 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3f7821573291343faab9316a8016770103b3b049eefdfb1b05c7a46092dff450 3 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3f7821573291343faab9316a8016770103b3b049eefdfb1b05c7a46092dff450 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wtm 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wtm 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wtm 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.698 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f1961c28d451df5b40095eead75dd34e 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.O4c 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f1961c28d451df5b40095eead75dd34e 1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f1961c28d451df5b40095eead75dd34e 1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f1961c28d451df5b40095eead75dd34e 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.O4c 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.O4c 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.O4c 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:22:19.698 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5087d321fab18754c7c409b99d79e275ed73460767d50e92 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UQ9 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5087d321fab18754c7c409b99d79e275ed73460767d50e92 2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
5087d321fab18754c7c409b99d79e275ed73460767d50e92 2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5087d321fab18754c7c409b99d79e275ed73460767d50e92 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UQ9 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UQ9 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.UQ9 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af76dc5eea4b86d3c0a3317ead9998af1b3e32fcddfe756a 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bWT 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af76dc5eea4b86d3c0a3317ead9998af1b3e32fcddfe756a 2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af76dc5eea4b86d3c0a3317ead9998af1b3e32fcddfe756a 2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af76dc5eea4b86d3c0a3317ead9998af1b3e32fcddfe756a 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bWT 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bWT 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.bWT 01:22:19.957 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b57eaa4922c5a952531acd821834a52e 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ks1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b57eaa4922c5a952531acd821834a52e 1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b57eaa4922c5a952531acd821834a52e 1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b57eaa4922c5a952531acd821834a52e 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ks1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ks1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ks1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=13f782819a5e0e29f0852fecb56833f4d78781f1e9da643dc9a10418e7785c07 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:22:19.957 
05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.psF 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 13f782819a5e0e29f0852fecb56833f4d78781f1e9da643dc9a10418e7785c07 3 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 13f782819a5e0e29f0852fecb56833f4d78781f1e9da643dc9a10418e7785c07 3 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=13f782819a5e0e29f0852fecb56833f4d78781f1e9da643dc9a10418e7785c07 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.psF 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.psF 01:22:19.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.psF 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 75963 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75963 ']' 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:19.957 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
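Note: the gen_dhchap_key / format_dhchap_key calls traced above amount to roughly the standalone sketch below -- a random hex secret read from /dev/urandom with xxd, then wrapped into the DHHC-1:<digest>:<base64>: representation that later appears in the nvme connect --dhchap-secret arguments. This is a hedged reconstruction, not the actual helpers from nvmf/common.sh: the function name gen_dhchap_key_sketch is made up here, and the CRC-32 suffix appended before base64-encoding is an assumption based on the NVMe TP 8006 secret format rather than something visible in this trace.

# Sketch of what the traced gen_dhchap_key appears to do (assumptions noted inline).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2   # digest index: 0=null, 1=sha256, 2=sha384, 3=sha512
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # <len> hex characters, as in the trace
    file=$(mktemp -t spdk.key-XXX)
    python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed CRC-32 (little-endian) suffix per TP 8006
print("DHHC-1:%02x:" % int(sys.argv[2]) + base64.b64encode(secret + crc).decode() + ":")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

# Usage mirroring the trace: a 48-character null-digest key, registered on the
# host-side SPDK app the same way the keyring_file_add_key RPCs below do it.
keyfile=$(gen_dhchap_key_sketch 0 48)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$keyfile"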
01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76007 /var/tmp/host.sock 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76007 ']' 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:20.215 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aSf 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aSf 01:22:20.474 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aSf 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wtm ]] 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wtm 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc 
keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wtm 01:22:20.732 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wtm 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.O4c 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.O4c 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.O4c 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.UQ9 ]] 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UQ9 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UQ9 01:22:20.990 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UQ9 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bWT 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bWT 01:22:21.248 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bWT 01:22:21.505 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ks1 ]] 01:22:21.505 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ks1 01:22:21.505 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.505 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:21.505 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.505 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ks1 01:22:21.505 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ks1 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.psF 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.psF 01:22:21.763 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.psF 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:22:22.021 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:22.280 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:22.280 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:22.280 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:22.280 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:22.280 01:22:22.539 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:22.539 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:22.539 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:22.539 { 01:22:22.539 "auth": { 01:22:22.539 "dhgroup": "null", 01:22:22.539 "digest": "sha256", 01:22:22.539 "state": "completed" 01:22:22.539 }, 01:22:22.539 "cntlid": 1, 01:22:22.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:22.539 "listen_address": { 01:22:22.539 "adrfam": "IPv4", 01:22:22.539 "traddr": "10.0.0.3", 01:22:22.539 "trsvcid": "4420", 01:22:22.539 "trtype": "TCP" 01:22:22.539 }, 01:22:22.539 "peer_address": { 01:22:22.539 "adrfam": "IPv4", 01:22:22.539 "traddr": "10.0.0.1", 01:22:22.539 "trsvcid": "59332", 01:22:22.539 "trtype": "TCP" 01:22:22.539 }, 01:22:22.539 "qid": 0, 01:22:22.539 "state": "enabled", 01:22:22.539 "thread": "nvmf_tgt_poll_group_000" 01:22:22.539 } 01:22:22.539 ]' 01:22:22.539 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:22.797 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:22.798 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:22.798 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:22:22.798 05:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:22.798 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:22.798 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:22.798 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:23.057 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:23.057 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:27.278 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:27.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:27.278 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:27.278 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.278 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:27.279 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:27.279 { 01:22:27.279 "auth": { 01:22:27.279 "dhgroup": "null", 01:22:27.279 "digest": "sha256", 01:22:27.279 "state": "completed" 01:22:27.279 }, 01:22:27.279 "cntlid": 3, 01:22:27.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:27.279 "listen_address": { 01:22:27.279 "adrfam": "IPv4", 01:22:27.279 "traddr": "10.0.0.3", 01:22:27.279 "trsvcid": "4420", 01:22:27.279 "trtype": "TCP" 01:22:27.279 }, 01:22:27.279 "peer_address": { 01:22:27.279 "adrfam": "IPv4", 01:22:27.279 "traddr": "10.0.0.1", 01:22:27.279 "trsvcid": "59358", 01:22:27.279 "trtype": "TCP" 01:22:27.279 }, 01:22:27.279 "qid": 0, 01:22:27.279 "state": "enabled", 01:22:27.279 "thread": "nvmf_tgt_poll_group_000" 01:22:27.279 } 01:22:27.279 ]' 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:27.279 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:27.536 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:27.536 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:28.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:28.101 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:28.359 05:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:28.616 01:22:28.616 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:28.616 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:28.616 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:28.873 { 01:22:28.873 "auth": { 01:22:28.873 "dhgroup": "null", 01:22:28.873 "digest": "sha256", 01:22:28.873 "state": "completed" 01:22:28.873 }, 01:22:28.873 "cntlid": 5, 01:22:28.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:28.873 "listen_address": { 01:22:28.873 "adrfam": "IPv4", 01:22:28.873 "traddr": "10.0.0.3", 01:22:28.873 "trsvcid": "4420", 01:22:28.873 "trtype": "TCP" 01:22:28.873 }, 01:22:28.873 "peer_address": { 01:22:28.873 "adrfam": "IPv4", 01:22:28.873 "traddr": "10.0.0.1", 01:22:28.873 "trsvcid": "35852", 01:22:28.873 "trtype": "TCP" 01:22:28.873 }, 01:22:28.873 "qid": 0, 01:22:28.873 "state": "enabled", 01:22:28.873 "thread": "nvmf_tgt_poll_group_000" 01:22:28.873 } 01:22:28.873 ]' 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:28.873 05:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:28.873 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:29.130 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:29.130 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:29.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:29.694 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:29.951 
05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:29.951 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:30.209 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:30.209 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:30.466 { 01:22:30.466 "auth": { 01:22:30.466 "dhgroup": "null", 01:22:30.466 "digest": "sha256", 01:22:30.466 "state": "completed" 01:22:30.466 }, 01:22:30.466 "cntlid": 7, 01:22:30.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:30.466 "listen_address": { 01:22:30.466 "adrfam": "IPv4", 01:22:30.466 "traddr": "10.0.0.3", 01:22:30.466 "trsvcid": "4420", 01:22:30.466 "trtype": "TCP" 01:22:30.466 }, 01:22:30.466 "peer_address": { 01:22:30.466 "adrfam": "IPv4", 01:22:30.466 "traddr": "10.0.0.1", 01:22:30.466 "trsvcid": "35876", 01:22:30.466 "trtype": "TCP" 01:22:30.466 }, 01:22:30.466 "qid": 0, 01:22:30.466 "state": "enabled", 01:22:30.466 "thread": "nvmf_tgt_poll_group_000" 01:22:30.466 } 01:22:30.466 ]' 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:30.466 05:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:30.466 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:30.723 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:30.723 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:31.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.289 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:31.548 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.548 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:31.548 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:31.548 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:31.548 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:31.806 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:31.807 { 01:22:31.807 "auth": { 01:22:31.807 "dhgroup": "ffdhe2048", 01:22:31.807 "digest": "sha256", 01:22:31.807 "state": "completed" 01:22:31.807 }, 01:22:31.807 "cntlid": 9, 01:22:31.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:31.807 "listen_address": { 01:22:31.807 "adrfam": "IPv4", 01:22:31.807 "traddr": "10.0.0.3", 01:22:31.807 "trsvcid": "4420", 01:22:31.807 "trtype": "TCP" 01:22:31.807 }, 01:22:31.807 "peer_address": { 01:22:31.807 "adrfam": "IPv4", 01:22:31.807 "traddr": "10.0.0.1", 01:22:31.807 "trsvcid": "35896", 01:22:31.807 "trtype": "TCP" 01:22:31.807 }, 01:22:31.807 "qid": 0, 01:22:31.807 "state": "enabled", 01:22:31.807 "thread": "nvmf_tgt_poll_group_000" 01:22:31.807 } 01:22:31.807 ]' 01:22:31.807 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:32.065 05:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:32.065 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:32.323 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:32.323 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:32.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:22:32.889 
05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:32.889 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:33.147 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:33.405 { 01:22:33.405 "auth": { 01:22:33.405 "dhgroup": "ffdhe2048", 01:22:33.405 "digest": "sha256", 01:22:33.405 "state": "completed" 01:22:33.405 }, 01:22:33.405 "cntlid": 11, 01:22:33.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:33.405 "listen_address": { 01:22:33.405 "adrfam": "IPv4", 01:22:33.405 "traddr": "10.0.0.3", 01:22:33.405 "trsvcid": "4420", 01:22:33.405 "trtype": "TCP" 01:22:33.405 }, 01:22:33.405 "peer_address": { 01:22:33.405 "adrfam": "IPv4", 01:22:33.405 "traddr": "10.0.0.1", 01:22:33.405 "trsvcid": "35924", 01:22:33.405 "trtype": "TCP" 01:22:33.405 }, 01:22:33.405 "qid": 0, 01:22:33.405 "state": "enabled", 01:22:33.405 "thread": 
"nvmf_tgt_poll_group_000" 01:22:33.405 } 01:22:33.405 ]' 01:22:33.405 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:33.663 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:33.921 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:33.921 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:34.489 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:34.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:34.490 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:34.750 
05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:34.750 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:35.008 01:22:35.008 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:35.008 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:35.008 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:35.284 { 01:22:35.284 "auth": { 01:22:35.284 "dhgroup": "ffdhe2048", 01:22:35.284 "digest": "sha256", 01:22:35.284 "state": "completed" 01:22:35.284 }, 01:22:35.284 "cntlid": 13, 01:22:35.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:35.284 "listen_address": { 01:22:35.284 "adrfam": "IPv4", 01:22:35.284 "traddr": "10.0.0.3", 01:22:35.284 "trsvcid": "4420", 01:22:35.284 "trtype": "TCP" 01:22:35.284 }, 01:22:35.284 "peer_address": { 01:22:35.284 "adrfam": "IPv4", 01:22:35.284 "traddr": "10.0.0.1", 01:22:35.284 "trsvcid": "35960", 01:22:35.284 
"trtype": "TCP" 01:22:35.284 }, 01:22:35.284 "qid": 0, 01:22:35.284 "state": "enabled", 01:22:35.284 "thread": "nvmf_tgt_poll_group_000" 01:22:35.284 } 01:22:35.284 ]' 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:35.284 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:35.562 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:35.562 05:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:36.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:36.128 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:36.385 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:36.642 01:22:36.642 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:36.642 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:36.642 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:36.898 { 01:22:36.898 "auth": { 01:22:36.898 "dhgroup": "ffdhe2048", 01:22:36.898 "digest": "sha256", 01:22:36.898 "state": "completed" 01:22:36.898 }, 01:22:36.898 "cntlid": 15, 01:22:36.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:36.898 "listen_address": { 01:22:36.898 "adrfam": "IPv4", 01:22:36.898 "traddr": "10.0.0.3", 01:22:36.898 "trsvcid": "4420", 01:22:36.898 "trtype": "TCP" 01:22:36.898 }, 01:22:36.898 "peer_address": { 01:22:36.898 "adrfam": "IPv4", 01:22:36.898 "traddr": "10.0.0.1", 01:22:36.898 "trsvcid": "35984", 
01:22:36.898 "trtype": "TCP" 01:22:36.898 }, 01:22:36.898 "qid": 0, 01:22:36.898 "state": "enabled", 01:22:36.898 "thread": "nvmf_tgt_poll_group_000" 01:22:36.898 } 01:22:36.898 ]' 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:36.898 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:37.207 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:37.207 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:37.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey 
qpairs 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:37.772 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:38.337 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:38.337 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:38.337 { 01:22:38.337 "auth": { 01:22:38.337 "dhgroup": "ffdhe3072", 01:22:38.337 "digest": "sha256", 01:22:38.337 "state": "completed" 01:22:38.337 }, 01:22:38.337 "cntlid": 17, 01:22:38.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:38.337 "listen_address": { 01:22:38.337 "adrfam": "IPv4", 01:22:38.337 "traddr": "10.0.0.3", 01:22:38.337 "trsvcid": "4420", 01:22:38.337 "trtype": "TCP" 01:22:38.337 }, 01:22:38.337 
"peer_address": { 01:22:38.337 "adrfam": "IPv4", 01:22:38.337 "traddr": "10.0.0.1", 01:22:38.337 "trsvcid": "55364", 01:22:38.337 "trtype": "TCP" 01:22:38.337 }, 01:22:38.337 "qid": 0, 01:22:38.337 "state": "enabled", 01:22:38.338 "thread": "nvmf_tgt_poll_group_000" 01:22:38.338 } 01:22:38.338 ]' 01:22:38.338 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:38.338 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:38.338 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:38.595 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:22:38.595 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:38.595 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:38.595 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:38.595 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:38.595 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:38.595 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:39.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:39.160 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:39.418 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:39.418 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:39.676 01:22:39.676 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:39.676 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:39.676 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:39.934 { 01:22:39.934 "auth": { 01:22:39.934 "dhgroup": "ffdhe3072", 01:22:39.934 "digest": "sha256", 01:22:39.934 "state": "completed" 01:22:39.934 }, 01:22:39.934 "cntlid": 19, 01:22:39.934 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:39.934 "listen_address": { 01:22:39.934 "adrfam": "IPv4", 01:22:39.934 "traddr": "10.0.0.3", 01:22:39.934 "trsvcid": "4420", 01:22:39.934 "trtype": "TCP" 01:22:39.934 }, 01:22:39.934 "peer_address": { 01:22:39.934 "adrfam": "IPv4", 01:22:39.934 "traddr": "10.0.0.1", 01:22:39.934 "trsvcid": "55396", 01:22:39.934 "trtype": "TCP" 01:22:39.934 }, 01:22:39.934 "qid": 0, 01:22:39.934 "state": "enabled", 01:22:39.934 "thread": "nvmf_tgt_poll_group_000" 01:22:39.934 } 01:22:39.934 ]' 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:39.934 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:40.193 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:40.761 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:41.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:41.020 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:41.279 01:22:41.279 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:41.279 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:41.279 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:41.537 { 01:22:41.537 "auth": { 01:22:41.537 "dhgroup": "ffdhe3072", 
01:22:41.537 "digest": "sha256", 01:22:41.537 "state": "completed" 01:22:41.537 }, 01:22:41.537 "cntlid": 21, 01:22:41.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:41.537 "listen_address": { 01:22:41.537 "adrfam": "IPv4", 01:22:41.537 "traddr": "10.0.0.3", 01:22:41.537 "trsvcid": "4420", 01:22:41.537 "trtype": "TCP" 01:22:41.537 }, 01:22:41.537 "peer_address": { 01:22:41.537 "adrfam": "IPv4", 01:22:41.537 "traddr": "10.0.0.1", 01:22:41.537 "trsvcid": "55424", 01:22:41.537 "trtype": "TCP" 01:22:41.537 }, 01:22:41.537 "qid": 0, 01:22:41.537 "state": "enabled", 01:22:41.537 "thread": "nvmf_tgt_poll_group_000" 01:22:41.537 } 01:22:41.537 ]' 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:41.537 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:41.796 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:22:41.796 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:41.796 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:41.796 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:41.796 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:42.054 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:42.054 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:42.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 01:22:42.621 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:42.621 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:42.879 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:43.138 { 01:22:43.138 "auth": { 01:22:43.138 "dhgroup": 
"ffdhe3072", 01:22:43.138 "digest": "sha256", 01:22:43.138 "state": "completed" 01:22:43.138 }, 01:22:43.138 "cntlid": 23, 01:22:43.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:43.138 "listen_address": { 01:22:43.138 "adrfam": "IPv4", 01:22:43.138 "traddr": "10.0.0.3", 01:22:43.138 "trsvcid": "4420", 01:22:43.138 "trtype": "TCP" 01:22:43.138 }, 01:22:43.138 "peer_address": { 01:22:43.138 "adrfam": "IPv4", 01:22:43.138 "traddr": "10.0.0.1", 01:22:43.138 "trsvcid": "55446", 01:22:43.138 "trtype": "TCP" 01:22:43.138 }, 01:22:43.138 "qid": 0, 01:22:43.138 "state": "enabled", 01:22:43.138 "thread": "nvmf_tgt_poll_group_000" 01:22:43.138 } 01:22:43.138 ]' 01:22:43.138 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:43.396 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:43.397 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:43.662 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:43.662 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:44.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:44.250 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:44.507 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:44.765 05:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:44.765 { 01:22:44.765 "auth": { 01:22:44.765 "dhgroup": "ffdhe4096", 01:22:44.765 "digest": "sha256", 01:22:44.765 "state": "completed" 01:22:44.765 }, 01:22:44.765 "cntlid": 25, 01:22:44.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:44.765 "listen_address": { 01:22:44.765 "adrfam": "IPv4", 01:22:44.765 "traddr": "10.0.0.3", 01:22:44.765 "trsvcid": "4420", 01:22:44.765 "trtype": "TCP" 01:22:44.765 }, 01:22:44.765 "peer_address": { 01:22:44.765 "adrfam": "IPv4", 01:22:44.765 "traddr": "10.0.0.1", 01:22:44.765 "trsvcid": "55478", 01:22:44.765 "trtype": "TCP" 01:22:44.765 }, 01:22:44.765 "qid": 0, 01:22:44.765 "state": "enabled", 01:22:44.765 "thread": "nvmf_tgt_poll_group_000" 01:22:44.765 } 01:22:44.765 ]' 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:44.765 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:45.021 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:22:45.021 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:45.021 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:45.021 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:45.021 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:45.278 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:45.278 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:45.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:45.843 05:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:45.843 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:45.844 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:45.844 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:45.844 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:46.101 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.101 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:46.101 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:46.101 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:46.359 01:22:46.359 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:46.359 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:46.359 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:46.617 05:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:46.617 { 01:22:46.617 "auth": { 01:22:46.617 "dhgroup": "ffdhe4096", 01:22:46.617 "digest": "sha256", 01:22:46.617 "state": "completed" 01:22:46.617 }, 01:22:46.617 "cntlid": 27, 01:22:46.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:46.617 "listen_address": { 01:22:46.617 "adrfam": "IPv4", 01:22:46.617 "traddr": "10.0.0.3", 01:22:46.617 "trsvcid": "4420", 01:22:46.617 "trtype": "TCP" 01:22:46.617 }, 01:22:46.617 "peer_address": { 01:22:46.617 "adrfam": "IPv4", 01:22:46.617 "traddr": "10.0.0.1", 01:22:46.617 "trsvcid": "55506", 01:22:46.617 "trtype": "TCP" 01:22:46.617 }, 01:22:46.617 "qid": 0, 01:22:46.617 "state": "enabled", 01:22:46.617 "thread": "nvmf_tgt_poll_group_000" 01:22:46.617 } 01:22:46.617 ]' 01:22:46.617 05:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:46.617 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:46.875 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:46.875 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:47.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
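For reference, the qpair check that target/auth.sh repeats in the trace above (script lines @73 through @78) boils down to the short sequence below. This is a minimal sketch assuming the same subsystem NQN and host RPC socket as in this run; the target-side call is shown without an explicit socket, which assumes the target app listens on its default one (the test wraps that call in rpc_cmd).

# Host side: confirm the controller attached under the expected name.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

# Target side: list the subsystem's qpairs and inspect the negotiated auth parameters.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
echo "$qpairs" | jq -r '.[0].auth.digest'     # expect: sha256
echo "$qpairs" | jq -r '.[0].auth.dhgroup'    # expect: ffdhe4096 in this iteration
echo "$qpairs" | jq -r '.[0].auth.state'      # expect: completed

# Tear the bdev connection down again before the next key is exercised.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0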
01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:47.481 05:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:47.481 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:48.046 01:22:48.046 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:48.046 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:48.047 05:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:48.047 { 01:22:48.047 "auth": { 01:22:48.047 "dhgroup": "ffdhe4096", 01:22:48.047 "digest": "sha256", 01:22:48.047 "state": "completed" 01:22:48.047 }, 01:22:48.047 "cntlid": 29, 01:22:48.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:48.047 "listen_address": { 01:22:48.047 "adrfam": "IPv4", 01:22:48.047 "traddr": "10.0.0.3", 01:22:48.047 "trsvcid": "4420", 01:22:48.047 "trtype": "TCP" 01:22:48.047 }, 01:22:48.047 "peer_address": { 01:22:48.047 "adrfam": "IPv4", 01:22:48.047 "traddr": "10.0.0.1", 01:22:48.047 "trsvcid": "40926", 01:22:48.047 "trtype": "TCP" 01:22:48.047 }, 01:22:48.047 "qid": 0, 01:22:48.047 "state": "enabled", 01:22:48.047 "thread": "nvmf_tgt_poll_group_000" 01:22:48.047 } 01:22:48.047 ]' 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:48.047 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:48.304 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:22:48.304 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:48.304 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:48.304 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:48.304 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:48.562 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:48.562 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:49.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
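Each iteration in this part of the trace re-registers the host on the subsystem with a different key pair before reconnecting. A sketch of that provisioning step, assuming the keyring entries key0..key3 and ckey0..ckey3 were created earlier in the test run (their registration is not part of this excerpt):

# Target side: allow the host NQN and bind it to a DH-HMAC-CHAP key plus an optional controller key.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach with the matching keys so the connection authenticates during controller creation.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

When only unidirectional authentication is wanted, the controller key is simply left out; the key3 iteration a little further down does exactly that.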
01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:49.127 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:49.128 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:49.128 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:49.128 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:49.694 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:49.694 
05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:49.694 { 01:22:49.694 "auth": { 01:22:49.694 "dhgroup": "ffdhe4096", 01:22:49.694 "digest": "sha256", 01:22:49.694 "state": "completed" 01:22:49.694 }, 01:22:49.694 "cntlid": 31, 01:22:49.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:49.694 "listen_address": { 01:22:49.694 "adrfam": "IPv4", 01:22:49.694 "traddr": "10.0.0.3", 01:22:49.694 "trsvcid": "4420", 01:22:49.694 "trtype": "TCP" 01:22:49.694 }, 01:22:49.694 "peer_address": { 01:22:49.694 "adrfam": "IPv4", 01:22:49.694 "traddr": "10.0.0.1", 01:22:49.694 "trsvcid": "40966", 01:22:49.694 "trtype": "TCP" 01:22:49.694 }, 01:22:49.694 "qid": 0, 01:22:49.694 "state": "enabled", 01:22:49.694 "thread": "nvmf_tgt_poll_group_000" 01:22:49.694 } 01:22:49.694 ]' 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:49.694 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:49.953 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:22:49.953 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:49.953 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:49.953 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:49.953 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:50.211 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:50.211 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:50.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
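The same keys are also exercised through the kernel initiator with nvme-cli, passing the DHHC-1 secrets directly instead of keyring names. A sketch of the connect/disconnect pair used throughout this trace, with the secret values abbreviated here (the full strings appear verbatim in the log lines above):

# Connect with the host secret and, for bidirectional authentication, the controller secret.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 \
    --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...

# Drop the connection again once the login (and with it the authentication) has succeeded.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0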
01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:50.777 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:51.343 01:22:51.343 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:51.343 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:51.343 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:51.601 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:51.601 05:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:51.601 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:51.601 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:51.601 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:51.601 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:51.601 { 01:22:51.601 "auth": { 01:22:51.601 "dhgroup": "ffdhe6144", 01:22:51.601 "digest": "sha256", 01:22:51.601 "state": "completed" 01:22:51.601 }, 01:22:51.601 "cntlid": 33, 01:22:51.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:51.601 "listen_address": { 01:22:51.601 "adrfam": "IPv4", 01:22:51.601 "traddr": "10.0.0.3", 01:22:51.601 "trsvcid": "4420", 01:22:51.601 "trtype": "TCP" 01:22:51.601 }, 01:22:51.601 "peer_address": { 01:22:51.601 "adrfam": "IPv4", 01:22:51.601 "traddr": "10.0.0.1", 01:22:51.601 "trsvcid": "40990", 01:22:51.602 "trtype": "TCP" 01:22:51.602 }, 01:22:51.602 "qid": 0, 01:22:51.602 "state": "enabled", 01:22:51.602 "thread": "nvmf_tgt_poll_group_000" 01:22:51.602 } 01:22:51.602 ]' 01:22:51.602 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:51.602 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:51.860 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:51.860 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:52.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:52.426 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:52.684 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:52.942 01:22:52.942 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:52.942 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:52.942 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:53.200 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:53.200 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:53.200 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:53.201 { 01:22:53.201 "auth": { 01:22:53.201 "dhgroup": "ffdhe6144", 01:22:53.201 "digest": "sha256", 01:22:53.201 "state": "completed" 01:22:53.201 }, 01:22:53.201 "cntlid": 35, 01:22:53.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:53.201 "listen_address": { 01:22:53.201 "adrfam": "IPv4", 01:22:53.201 "traddr": "10.0.0.3", 01:22:53.201 "trsvcid": "4420", 01:22:53.201 "trtype": "TCP" 01:22:53.201 }, 01:22:53.201 "peer_address": { 01:22:53.201 "adrfam": "IPv4", 01:22:53.201 "traddr": "10.0.0.1", 01:22:53.201 "trsvcid": "41018", 01:22:53.201 "trtype": "TCP" 01:22:53.201 }, 01:22:53.201 "qid": 0, 01:22:53.201 "state": "enabled", 01:22:53.201 "thread": "nvmf_tgt_poll_group_000" 01:22:53.201 } 01:22:53.201 ]' 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:53.201 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:53.459 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:53.459 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:54.025 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:54.025 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:54.282 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:22:54.539 01:22:54.539 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:54.539 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 01:22:54.539 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:54.797 { 01:22:54.797 "auth": { 01:22:54.797 "dhgroup": "ffdhe6144", 01:22:54.797 "digest": "sha256", 01:22:54.797 "state": "completed" 01:22:54.797 }, 01:22:54.797 "cntlid": 37, 01:22:54.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:54.797 "listen_address": { 01:22:54.797 "adrfam": "IPv4", 01:22:54.797 "traddr": "10.0.0.3", 01:22:54.797 "trsvcid": "4420", 01:22:54.797 "trtype": "TCP" 01:22:54.797 }, 01:22:54.797 "peer_address": { 01:22:54.797 "adrfam": "IPv4", 01:22:54.797 "traddr": "10.0.0.1", 01:22:54.797 "trsvcid": "41046", 01:22:54.797 "trtype": "TCP" 01:22:54.797 }, 01:22:54.797 "qid": 0, 01:22:54.797 "state": "enabled", 01:22:54.797 "thread": "nvmf_tgt_poll_group_000" 01:22:54.797 } 01:22:54.797 ]' 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:22:54.797 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:55.057 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:55.057 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:55.057 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:55.057 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:55.057 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:22:55.626 05:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:55.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:55.626 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:55.882 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:56.451 01:22:56.451 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:56.451 05:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:56.451 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:56.451 { 01:22:56.451 "auth": { 01:22:56.451 "dhgroup": "ffdhe6144", 01:22:56.451 "digest": "sha256", 01:22:56.451 "state": "completed" 01:22:56.451 }, 01:22:56.451 "cntlid": 39, 01:22:56.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:56.451 "listen_address": { 01:22:56.451 "adrfam": "IPv4", 01:22:56.451 "traddr": "10.0.0.3", 01:22:56.451 "trsvcid": "4420", 01:22:56.451 "trtype": "TCP" 01:22:56.451 }, 01:22:56.451 "peer_address": { 01:22:56.451 "adrfam": "IPv4", 01:22:56.451 "traddr": "10.0.0.1", 01:22:56.451 "trsvcid": "41070", 01:22:56.451 "trtype": "TCP" 01:22:56.451 }, 01:22:56.451 "qid": 0, 01:22:56.451 "state": "enabled", 01:22:56.451 "thread": "nvmf_tgt_poll_group_000" 01:22:56.451 } 01:22:56.451 ]' 01:22:56.451 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:56.711 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:56.971 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:56.971 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 01:22:57.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:22:57.538 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:57.538 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:58.109 01:22:58.109 05:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:58.109 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:58.109 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:58.432 { 01:22:58.432 "auth": { 01:22:58.432 "dhgroup": "ffdhe8192", 01:22:58.432 "digest": "sha256", 01:22:58.432 "state": "completed" 01:22:58.432 }, 01:22:58.432 "cntlid": 41, 01:22:58.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:22:58.432 "listen_address": { 01:22:58.432 "adrfam": "IPv4", 01:22:58.432 "traddr": "10.0.0.3", 01:22:58.432 "trsvcid": "4420", 01:22:58.432 "trtype": "TCP" 01:22:58.432 }, 01:22:58.432 "peer_address": { 01:22:58.432 "adrfam": "IPv4", 01:22:58.432 "traddr": "10.0.0.1", 01:22:58.432 "trsvcid": "57628", 01:22:58.432 "trtype": "TCP" 01:22:58.432 }, 01:22:58.432 "qid": 0, 01:22:58.432 "state": "enabled", 01:22:58.432 "thread": "nvmf_tgt_poll_group_000" 01:22:58.432 } 01:22:58.432 ]' 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:58.432 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:58.692 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:58.692 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 
--dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:59.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:22:59.261 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:59.523 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:00.095 01:23:00.095 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:00.095 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:00.095 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:00.354 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:00.355 { 01:23:00.355 "auth": { 01:23:00.355 "dhgroup": "ffdhe8192", 01:23:00.355 "digest": "sha256", 01:23:00.355 "state": "completed" 01:23:00.355 }, 01:23:00.355 "cntlid": 43, 01:23:00.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:00.355 "listen_address": { 01:23:00.355 "adrfam": "IPv4", 01:23:00.355 "traddr": "10.0.0.3", 01:23:00.355 "trsvcid": "4420", 01:23:00.355 "trtype": "TCP" 01:23:00.355 }, 01:23:00.355 "peer_address": { 01:23:00.355 "adrfam": "IPv4", 01:23:00.355 "traddr": "10.0.0.1", 01:23:00.355 "trsvcid": "57668", 01:23:00.355 "trtype": "TCP" 01:23:00.355 }, 01:23:00.355 "qid": 0, 01:23:00.355 "state": "enabled", 01:23:00.355 "thread": "nvmf_tgt_poll_group_000" 01:23:00.355 } 01:23:00.355 ]' 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:00.355 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:00.614 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:00.615 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:01.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:23:01.183 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:01.441 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:02.007 01:23:02.007 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:02.007 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:02.007 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:02.007 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:02.008 { 01:23:02.008 "auth": { 01:23:02.008 "dhgroup": "ffdhe8192", 01:23:02.008 "digest": "sha256", 01:23:02.008 "state": "completed" 01:23:02.008 }, 01:23:02.008 "cntlid": 45, 01:23:02.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:02.008 "listen_address": { 01:23:02.008 "adrfam": "IPv4", 01:23:02.008 "traddr": "10.0.0.3", 01:23:02.008 "trsvcid": "4420", 01:23:02.008 "trtype": "TCP" 01:23:02.008 }, 01:23:02.008 "peer_address": { 01:23:02.008 "adrfam": "IPv4", 01:23:02.008 "traddr": "10.0.0.1", 01:23:02.008 "trsvcid": "57692", 01:23:02.008 "trtype": "TCP" 01:23:02.008 }, 01:23:02.008 "qid": 0, 01:23:02.008 "state": "enabled", 01:23:02.008 "thread": "nvmf_tgt_poll_group_000" 01:23:02.008 } 01:23:02.008 ]' 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:23:02.008 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:02.266 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:02.266 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:02.266 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:02.266 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:02.266 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:02.525 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 
01:23:02.525 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:03.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:03.091 05:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:03.091 05:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:03.657 01:23:03.657 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:03.657 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:03.657 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:03.919 { 01:23:03.919 "auth": { 01:23:03.919 "dhgroup": "ffdhe8192", 01:23:03.919 "digest": "sha256", 01:23:03.919 "state": "completed" 01:23:03.919 }, 01:23:03.919 "cntlid": 47, 01:23:03.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:03.919 "listen_address": { 01:23:03.919 "adrfam": "IPv4", 01:23:03.919 "traddr": "10.0.0.3", 01:23:03.919 "trsvcid": "4420", 01:23:03.919 "trtype": "TCP" 01:23:03.919 }, 01:23:03.919 "peer_address": { 01:23:03.919 "adrfam": "IPv4", 01:23:03.919 "traddr": "10.0.0.1", 01:23:03.919 "trsvcid": "57728", 01:23:03.919 "trtype": "TCP" 01:23:03.919 }, 01:23:03.919 "qid": 0, 01:23:03.919 "state": "enabled", 01:23:03.919 "thread": "nvmf_tgt_poll_group_000" 01:23:03.919 } 01:23:03.919 ]' 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:23:03.919 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:03.920 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:03.920 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:03.920 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:03.920 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:03.920 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:04.177 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:04.177 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:04.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:04.743 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:05.002 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:05.260 01:23:05.260 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:05.260 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:05.260 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:05.517 { 01:23:05.517 "auth": { 01:23:05.517 "dhgroup": "null", 01:23:05.517 "digest": "sha384", 01:23:05.517 "state": "completed" 01:23:05.517 }, 01:23:05.517 "cntlid": 49, 01:23:05.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:05.517 "listen_address": { 01:23:05.517 "adrfam": "IPv4", 01:23:05.517 "traddr": "10.0.0.3", 01:23:05.517 "trsvcid": "4420", 01:23:05.517 "trtype": "TCP" 01:23:05.517 }, 01:23:05.517 "peer_address": { 01:23:05.517 "adrfam": "IPv4", 01:23:05.517 "traddr": "10.0.0.1", 01:23:05.517 "trsvcid": "57758", 01:23:05.517 "trtype": "TCP" 01:23:05.517 }, 01:23:05.517 "qid": 0, 01:23:05.517 "state": "enabled", 01:23:05.517 "thread": "nvmf_tgt_poll_group_000" 01:23:05.517 } 01:23:05.517 ]' 01:23:05.517 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:05.517 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:05.776 05:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:05.776 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:06.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:06.350 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:06.351 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:06.351 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:06.609 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:06.866 01:23:06.866 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:06.866 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:06.866 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:07.124 { 01:23:07.124 "auth": { 01:23:07.124 "dhgroup": "null", 01:23:07.124 "digest": "sha384", 01:23:07.124 "state": "completed" 01:23:07.124 }, 01:23:07.124 "cntlid": 51, 01:23:07.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:07.124 "listen_address": { 01:23:07.124 "adrfam": "IPv4", 01:23:07.124 "traddr": "10.0.0.3", 01:23:07.124 "trsvcid": "4420", 01:23:07.124 "trtype": "TCP" 01:23:07.124 }, 01:23:07.124 "peer_address": { 01:23:07.124 "adrfam": "IPv4", 01:23:07.124 "traddr": "10.0.0.1", 01:23:07.124 "trsvcid": "57792", 01:23:07.124 "trtype": "TCP" 01:23:07.124 }, 01:23:07.124 "qid": 0, 01:23:07.124 "state": "enabled", 01:23:07.124 "thread": "nvmf_tgt_poll_group_000" 01:23:07.124 } 01:23:07.124 ]' 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:07.124 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:07.124 05:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:07.382 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:07.382 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:07.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:07.950 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:08.209 
05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:08.209 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:08.467 01:23:08.467 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:08.467 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:08.467 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:08.724 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:08.725 { 01:23:08.725 "auth": { 01:23:08.725 "dhgroup": "null", 01:23:08.725 "digest": "sha384", 01:23:08.725 "state": "completed" 01:23:08.725 }, 01:23:08.725 "cntlid": 53, 01:23:08.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:08.725 "listen_address": { 01:23:08.725 "adrfam": "IPv4", 01:23:08.725 "traddr": "10.0.0.3", 01:23:08.725 "trsvcid": "4420", 01:23:08.725 "trtype": "TCP" 01:23:08.725 }, 01:23:08.725 "peer_address": { 01:23:08.725 "adrfam": "IPv4", 01:23:08.725 "traddr": "10.0.0.1", 01:23:08.725 "trsvcid": "48434", 01:23:08.725 "trtype": "TCP" 01:23:08.725 }, 01:23:08.725 "qid": 0, 01:23:08.725 "state": "enabled", 01:23:08.725 "thread": "nvmf_tgt_poll_group_000" 01:23:08.725 } 01:23:08.725 ]' 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 
-- # hostrpc bdev_nvme_detach_controller nvme0 01:23:08.725 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:08.982 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:08.982 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:09.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:09.549 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:23:09.807 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 01:23:09.807 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:09.808 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:10.066 01:23:10.066 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:10.066 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:10.066 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:10.323 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:10.323 { 01:23:10.323 "auth": { 01:23:10.323 "dhgroup": "null", 01:23:10.323 "digest": "sha384", 01:23:10.323 "state": "completed" 01:23:10.323 }, 01:23:10.323 "cntlid": 55, 01:23:10.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:10.323 "listen_address": { 01:23:10.323 "adrfam": "IPv4", 01:23:10.323 "traddr": "10.0.0.3", 01:23:10.323 "trsvcid": "4420", 01:23:10.323 "trtype": "TCP" 01:23:10.323 }, 01:23:10.323 "peer_address": { 01:23:10.323 "adrfam": "IPv4", 01:23:10.323 "traddr": "10.0.0.1", 01:23:10.323 "trsvcid": "48442", 01:23:10.323 "trtype": "TCP" 01:23:10.323 }, 01:23:10.323 "qid": 0, 01:23:10.323 "state": "enabled", 01:23:10.323 "thread": "nvmf_tgt_poll_group_000" 01:23:10.323 } 01:23:10.323 ]' 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 01:23:10.324 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:10.617 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:10.617 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:11.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:11.187 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:11.446 05:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:11.446 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:11.706 01:23:11.706 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:11.706 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:11.706 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:11.966 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:11.966 { 01:23:11.966 "auth": { 01:23:11.966 "dhgroup": "ffdhe2048", 01:23:11.966 "digest": "sha384", 01:23:11.966 "state": "completed" 01:23:11.966 }, 01:23:11.966 "cntlid": 57, 01:23:11.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:11.966 "listen_address": { 01:23:11.966 "adrfam": "IPv4", 01:23:11.966 "traddr": "10.0.0.3", 01:23:11.966 "trsvcid": "4420", 01:23:11.966 "trtype": "TCP" 01:23:11.966 }, 01:23:11.966 "peer_address": { 01:23:11.966 "adrfam": "IPv4", 01:23:11.966 "traddr": "10.0.0.1", 01:23:11.966 "trsvcid": "48470", 01:23:11.966 "trtype": "TCP" 01:23:11.966 }, 01:23:11.966 "qid": 0, 01:23:11.966 "state": "enabled", 01:23:11.966 "thread": "nvmf_tgt_poll_group_000" 01:23:11.966 } 01:23:11.966 ]' 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:11.967 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:12.226 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:12.226 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:12.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:12.795 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:13.056 05:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:13.056 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:13.316 01:23:13.316 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:13.316 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:13.316 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:13.585 { 01:23:13.585 "auth": { 01:23:13.585 "dhgroup": "ffdhe2048", 01:23:13.585 "digest": "sha384", 01:23:13.585 "state": "completed" 01:23:13.585 }, 01:23:13.585 "cntlid": 59, 01:23:13.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:13.585 "listen_address": { 01:23:13.585 "adrfam": "IPv4", 01:23:13.585 "traddr": "10.0.0.3", 01:23:13.585 "trsvcid": "4420", 01:23:13.585 "trtype": "TCP" 01:23:13.585 }, 01:23:13.585 "peer_address": { 01:23:13.585 "adrfam": "IPv4", 01:23:13.585 "traddr": "10.0.0.1", 01:23:13.585 "trsvcid": "48492", 01:23:13.585 "trtype": "TCP" 01:23:13.585 }, 01:23:13.585 "qid": 0, 01:23:13.585 "state": "enabled", 01:23:13.585 "thread": "nvmf_tgt_poll_group_000" 01:23:13.585 } 01:23:13.585 ]' 01:23:13.585 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 
== \f\f\d\h\e\2\0\4\8 ]] 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:13.585 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:13.846 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:13.846 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:14.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:14.414 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:14.672 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:14.931 01:23:14.931 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:14.931 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:14.931 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:15.190 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:15.190 { 01:23:15.190 "auth": { 01:23:15.190 "dhgroup": "ffdhe2048", 01:23:15.190 "digest": "sha384", 01:23:15.190 "state": "completed" 01:23:15.190 }, 01:23:15.191 "cntlid": 61, 01:23:15.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:15.191 "listen_address": { 01:23:15.191 "adrfam": "IPv4", 01:23:15.191 "traddr": "10.0.0.3", 01:23:15.191 "trsvcid": "4420", 01:23:15.191 "trtype": "TCP" 01:23:15.191 }, 01:23:15.191 "peer_address": { 01:23:15.191 "adrfam": "IPv4", 01:23:15.191 "traddr": "10.0.0.1", 01:23:15.191 "trsvcid": "48524", 01:23:15.191 "trtype": "TCP" 01:23:15.191 }, 01:23:15.191 "qid": 0, 01:23:15.191 "state": "enabled", 01:23:15.191 "thread": "nvmf_tgt_poll_group_000" 01:23:15.191 } 01:23:15.191 ]' 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:15.191 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:15.451 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:15.451 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:16.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:16.021 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:16.290 05:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:16.290 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:16.550 01:23:16.550 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:16.550 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:16.550 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:16.810 { 01:23:16.810 "auth": { 01:23:16.810 "dhgroup": "ffdhe2048", 01:23:16.810 "digest": "sha384", 01:23:16.810 "state": "completed" 01:23:16.810 }, 01:23:16.810 "cntlid": 63, 01:23:16.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:16.810 "listen_address": { 01:23:16.810 "adrfam": "IPv4", 01:23:16.810 "traddr": "10.0.0.3", 01:23:16.810 "trsvcid": "4420", 01:23:16.810 "trtype": "TCP" 01:23:16.810 }, 01:23:16.810 "peer_address": { 01:23:16.810 "adrfam": "IPv4", 01:23:16.810 "traddr": "10.0.0.1", 01:23:16.810 "trsvcid": "48544", 01:23:16.810 "trtype": "TCP" 01:23:16.810 }, 01:23:16.810 "qid": 0, 01:23:16.810 "state": "enabled", 01:23:16.810 "thread": "nvmf_tgt_poll_group_000" 01:23:16.810 } 01:23:16.810 ]' 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:16.810 05:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:16.810 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:17.070 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:17.070 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:17.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:17.641 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:17.903 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:18.162 01:23:18.162 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:18.162 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:18.162 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:18.424 { 01:23:18.424 "auth": { 01:23:18.424 "dhgroup": "ffdhe3072", 01:23:18.424 "digest": "sha384", 01:23:18.424 "state": "completed" 01:23:18.424 }, 01:23:18.424 "cntlid": 65, 01:23:18.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:18.424 "listen_address": { 01:23:18.424 "adrfam": "IPv4", 01:23:18.424 "traddr": "10.0.0.3", 01:23:18.424 "trsvcid": "4420", 01:23:18.424 "trtype": "TCP" 01:23:18.424 }, 01:23:18.424 "peer_address": { 01:23:18.424 "adrfam": "IPv4", 01:23:18.424 "traddr": "10.0.0.1", 01:23:18.424 "trsvcid": "48688", 01:23:18.424 "trtype": "TCP" 01:23:18.424 }, 01:23:18.424 "qid": 0, 01:23:18.424 "state": "enabled", 01:23:18.424 "thread": "nvmf_tgt_poll_group_000" 01:23:18.424 } 01:23:18.424 ]' 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
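The trace above and below repeats one and the same connect_authenticate cycle for every digest/dhgroup/key combination (here sha384 with ffdhe2048, ffdhe3072 and ffdhe4096, keys 0-3). Reconstructed from the xtrace as a minimal stand-alone sketch, one iteration boils down to the RPC sequence below; it assumes the target listens on 10.0.0.3:4420, the host-side bdev layer is driven through the RPC socket /var/tmp/host.sock, DH-HMAC-CHAP keys named key1/ckey1 (and their DHHC-1 secret values) were registered earlier in the run, and that the script's rpc_cmd helper simply calls rpc.py against the target's default RPC socket (an assumption; its definition is not part of this excerpt).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: authorize the host NQN on the subsystem with the key pair under test.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the same keys.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: confirm the qpair completed authentication with the expected parameters.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq '.[0].auth'
# expected: "digest": "sha384", "dhgroup": "ffdhe3072", "state": "completed"

# Tear down, re-run the same check through the kernel initiator
# (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...,
#  secret values elided here), then de-authorize the host for the next iteration.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"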
01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:18.424 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:18.685 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:18.686 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:19.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:19.258 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:19.526 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:19.787 01:23:19.787 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:19.787 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:19.787 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:20.047 { 01:23:20.047 "auth": { 01:23:20.047 "dhgroup": "ffdhe3072", 01:23:20.047 "digest": "sha384", 01:23:20.047 "state": "completed" 01:23:20.047 }, 01:23:20.047 "cntlid": 67, 01:23:20.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:20.047 "listen_address": { 01:23:20.047 "adrfam": "IPv4", 01:23:20.047 "traddr": "10.0.0.3", 01:23:20.047 "trsvcid": "4420", 01:23:20.047 "trtype": "TCP" 01:23:20.047 }, 01:23:20.047 "peer_address": { 01:23:20.047 "adrfam": "IPv4", 01:23:20.047 "traddr": "10.0.0.1", 01:23:20.047 "trsvcid": "48710", 01:23:20.047 "trtype": "TCP" 01:23:20.047 }, 01:23:20.047 "qid": 0, 01:23:20.047 "state": 
"enabled", 01:23:20.047 "thread": "nvmf_tgt_poll_group_000" 01:23:20.047 } 01:23:20.047 ]' 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:20.047 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:20.306 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:20.307 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:20.878 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:20.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:20.879 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:21.138 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:21.399 01:23:21.399 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:21.399 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:21.399 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:21.658 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:21.658 { 01:23:21.658 "auth": { 01:23:21.658 "dhgroup": "ffdhe3072", 01:23:21.658 "digest": "sha384", 01:23:21.658 "state": "completed" 01:23:21.658 }, 01:23:21.658 "cntlid": 69, 01:23:21.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:21.658 "listen_address": { 01:23:21.658 "adrfam": "IPv4", 01:23:21.659 "traddr": "10.0.0.3", 01:23:21.659 "trsvcid": "4420", 01:23:21.659 "trtype": "TCP" 01:23:21.659 }, 01:23:21.659 "peer_address": { 01:23:21.659 "adrfam": "IPv4", 01:23:21.659 "traddr": "10.0.0.1", 01:23:21.659 
"trsvcid": "48726", 01:23:21.659 "trtype": "TCP" 01:23:21.659 }, 01:23:21.659 "qid": 0, 01:23:21.659 "state": "enabled", 01:23:21.659 "thread": "nvmf_tgt_poll_group_000" 01:23:21.659 } 01:23:21.659 ]' 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:21.659 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:21.920 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:21.920 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:22.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:22.486 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:22.487 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:22.745 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:23.003 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:23.003 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:23.261 { 01:23:23.261 "auth": { 01:23:23.261 "dhgroup": "ffdhe3072", 01:23:23.261 "digest": "sha384", 01:23:23.261 "state": "completed" 01:23:23.261 }, 01:23:23.261 "cntlid": 71, 01:23:23.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:23.261 "listen_address": { 01:23:23.261 "adrfam": "IPv4", 01:23:23.261 "traddr": "10.0.0.3", 01:23:23.261 "trsvcid": "4420", 01:23:23.261 "trtype": "TCP" 01:23:23.261 }, 01:23:23.261 "peer_address": { 01:23:23.261 "adrfam": "IPv4", 01:23:23.261 "traddr": "10.0.0.1", 
01:23:23.261 "trsvcid": "48754", 01:23:23.261 "trtype": "TCP" 01:23:23.261 }, 01:23:23.261 "qid": 0, 01:23:23.261 "state": "enabled", 01:23:23.261 "thread": "nvmf_tgt_poll_group_000" 01:23:23.261 } 01:23:23.261 ]' 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:23.261 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:23.518 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:23.518 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:24.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:24.084 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:24.342 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:24.600 01:23:24.600 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:24.600 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:24.600 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:24.858 { 01:23:24.858 "auth": { 01:23:24.858 "dhgroup": "ffdhe4096", 01:23:24.858 "digest": "sha384", 01:23:24.858 "state": "completed" 01:23:24.858 }, 01:23:24.858 "cntlid": 73, 01:23:24.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:24.858 "listen_address": { 01:23:24.858 "adrfam": "IPv4", 01:23:24.858 "traddr": "10.0.0.3", 01:23:24.858 "trsvcid": "4420", 01:23:24.858 "trtype": "TCP" 
01:23:24.858 }, 01:23:24.858 "peer_address": { 01:23:24.858 "adrfam": "IPv4", 01:23:24.858 "traddr": "10.0.0.1", 01:23:24.858 "trsvcid": "48774", 01:23:24.858 "trtype": "TCP" 01:23:24.858 }, 01:23:24.858 "qid": 0, 01:23:24.858 "state": "enabled", 01:23:24.858 "thread": "nvmf_tgt_poll_group_000" 01:23:24.858 } 01:23:24.858 ]' 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:24.858 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:25.115 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:25.115 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:25.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:25.681 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:25.939 05:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:25.939 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:26.195 01:23:26.195 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:26.195 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:26.195 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:26.452 { 01:23:26.452 "auth": { 01:23:26.452 "dhgroup": "ffdhe4096", 01:23:26.452 "digest": "sha384", 01:23:26.452 "state": "completed" 01:23:26.452 }, 01:23:26.452 "cntlid": 75, 01:23:26.452 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:26.452 "listen_address": { 01:23:26.452 "adrfam": "IPv4", 01:23:26.452 "traddr": "10.0.0.3", 01:23:26.452 "trsvcid": "4420", 01:23:26.452 "trtype": "TCP" 01:23:26.452 }, 01:23:26.452 "peer_address": { 01:23:26.452 "adrfam": "IPv4", 01:23:26.452 "traddr": "10.0.0.1", 01:23:26.452 "trsvcid": "48808", 01:23:26.452 "trtype": "TCP" 01:23:26.452 }, 01:23:26.452 "qid": 0, 01:23:26.452 "state": "enabled", 01:23:26.452 "thread": "nvmf_tgt_poll_group_000" 01:23:26.452 } 01:23:26.452 ]' 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:23:26.452 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:26.452 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:26.452 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:26.452 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:26.709 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:26.709 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:27.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:27.274 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:27.532 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:27.789 01:23:27.789 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:27.789 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:27.789 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:28.047 { 01:23:28.047 "auth": { 01:23:28.047 "dhgroup": "ffdhe4096", 
01:23:28.047 "digest": "sha384", 01:23:28.047 "state": "completed" 01:23:28.047 }, 01:23:28.047 "cntlid": 77, 01:23:28.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:28.047 "listen_address": { 01:23:28.047 "adrfam": "IPv4", 01:23:28.047 "traddr": "10.0.0.3", 01:23:28.047 "trsvcid": "4420", 01:23:28.047 "trtype": "TCP" 01:23:28.047 }, 01:23:28.047 "peer_address": { 01:23:28.047 "adrfam": "IPv4", 01:23:28.047 "traddr": "10.0.0.1", 01:23:28.047 "trsvcid": "37752", 01:23:28.047 "trtype": "TCP" 01:23:28.047 }, 01:23:28.047 "qid": 0, 01:23:28.047 "state": "enabled", 01:23:28.047 "thread": "nvmf_tgt_poll_group_000" 01:23:28.047 } 01:23:28.047 ]' 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:28.047 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:28.306 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:28.306 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:28.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 01:23:28.870 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:29.129 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:29.387 01:23:29.387 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:29.387 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:29.387 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:29.645 { 01:23:29.645 "auth": { 01:23:29.645 "dhgroup": 
"ffdhe4096", 01:23:29.645 "digest": "sha384", 01:23:29.645 "state": "completed" 01:23:29.645 }, 01:23:29.645 "cntlid": 79, 01:23:29.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:29.645 "listen_address": { 01:23:29.645 "adrfam": "IPv4", 01:23:29.645 "traddr": "10.0.0.3", 01:23:29.645 "trsvcid": "4420", 01:23:29.645 "trtype": "TCP" 01:23:29.645 }, 01:23:29.645 "peer_address": { 01:23:29.645 "adrfam": "IPv4", 01:23:29.645 "traddr": "10.0.0.1", 01:23:29.645 "trsvcid": "37780", 01:23:29.645 "trtype": "TCP" 01:23:29.645 }, 01:23:29.645 "qid": 0, 01:23:29.645 "state": "enabled", 01:23:29.645 "thread": "nvmf_tgt_poll_group_000" 01:23:29.645 } 01:23:29.645 ]' 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:29.645 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:29.903 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:29.903 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:30.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:30.471 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:30.472 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:30.472 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:30.731 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:30.732 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:31.300 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:31.300 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:31.300 { 01:23:31.300 "auth": { 01:23:31.300 "dhgroup": "ffdhe6144", 01:23:31.300 "digest": "sha384", 01:23:31.300 "state": "completed" 01:23:31.300 }, 01:23:31.300 "cntlid": 81, 01:23:31.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:31.300 "listen_address": { 01:23:31.300 "adrfam": "IPv4", 01:23:31.300 "traddr": "10.0.0.3", 01:23:31.300 "trsvcid": "4420", 01:23:31.300 "trtype": "TCP" 01:23:31.300 }, 01:23:31.300 "peer_address": { 01:23:31.300 "adrfam": "IPv4", 01:23:31.300 "traddr": "10.0.0.1", 01:23:31.300 "trsvcid": "37804", 01:23:31.300 "trtype": "TCP" 01:23:31.300 }, 01:23:31.300 "qid": 0, 01:23:31.300 "state": "enabled", 01:23:31.300 "thread": "nvmf_tgt_poll_group_000" 01:23:31.300 } 01:23:31.300 ]' 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:31.300 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:31.557 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:23:31.558 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:31.558 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:31.558 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:31.558 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:31.558 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:31.558 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:32.123 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:32.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:32.124 05:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:32.124 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:32.385 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:32.954 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:32.954 05:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:32.954 { 01:23:32.954 "auth": { 01:23:32.954 "dhgroup": "ffdhe6144", 01:23:32.954 "digest": "sha384", 01:23:32.954 "state": "completed" 01:23:32.954 }, 01:23:32.954 "cntlid": 83, 01:23:32.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:32.954 "listen_address": { 01:23:32.954 "adrfam": "IPv4", 01:23:32.954 "traddr": "10.0.0.3", 01:23:32.954 "trsvcid": "4420", 01:23:32.954 "trtype": "TCP" 01:23:32.954 }, 01:23:32.954 "peer_address": { 01:23:32.954 "adrfam": "IPv4", 01:23:32.954 "traddr": "10.0.0.1", 01:23:32.954 "trsvcid": "37830", 01:23:32.954 "trtype": "TCP" 01:23:32.954 }, 01:23:32.954 "qid": 0, 01:23:32.954 "state": "enabled", 01:23:32.954 "thread": "nvmf_tgt_poll_group_000" 01:23:32.954 } 01:23:32.954 ]' 01:23:32.954 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:33.213 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:33.474 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:33.474 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:34.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
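Each authentication round above follows the same host/target RPC sequence; condensed into a minimal sketch (the socket paths, NQNs and keyring names are the ones printed in the log, and rpc_cmd stands in for the target-side rpc.py invocation used by the harness):

# host side: restrict the initiator to the digest/dhgroup pair under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# target side: allow the host NQN on the subsystem and register its
# DH-HMAC-CHAP keys (key2/ckey2 are the keyring entries for this round)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# host side: attach a controller over TCP, presenting the same keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

This corresponds to the target/auth.sh@121, @70 and @60 steps in the entries that follow.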
01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:34.042 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:34.302 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:34.559 01:23:34.559 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:34.559 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:34.559 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:34.816 05:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:34.816 { 01:23:34.816 "auth": { 01:23:34.816 "dhgroup": "ffdhe6144", 01:23:34.816 "digest": "sha384", 01:23:34.816 "state": "completed" 01:23:34.816 }, 01:23:34.816 "cntlid": 85, 01:23:34.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:34.816 "listen_address": { 01:23:34.816 "adrfam": "IPv4", 01:23:34.816 "traddr": "10.0.0.3", 01:23:34.816 "trsvcid": "4420", 01:23:34.816 "trtype": "TCP" 01:23:34.816 }, 01:23:34.816 "peer_address": { 01:23:34.816 "adrfam": "IPv4", 01:23:34.816 "traddr": "10.0.0.1", 01:23:34.816 "trsvcid": "37858", 01:23:34.816 "trtype": "TCP" 01:23:34.816 }, 01:23:34.816 "qid": 0, 01:23:34.816 "state": "enabled", 01:23:34.816 "thread": "nvmf_tgt_poll_group_000" 01:23:34.816 } 01:23:34.816 ]' 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:34.816 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:35.073 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:35.073 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:35.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
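After each attach the script verifies, via the target's nvmf_subsystem_get_qpairs output, that the admin queue actually negotiated the expected parameters before tearing the controller down; a compact sketch of that check (rpc_cmd again stands in for the target-side rpc.py call, and the expected values are the ones for the sha384/ffdhe6144 rounds shown here):

# fetch the qpairs of the subsystem and confirm the negotiated auth fields
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# detach the bdev controller before the next key/dhgroup combination
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0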
01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:35.638 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:35.954 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:36.211 01:23:36.211 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:36.211 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:36.211 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:36.469 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:36.469 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:36.469 
05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:36.469 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:36.469 { 01:23:36.469 "auth": { 01:23:36.469 "dhgroup": "ffdhe6144", 01:23:36.469 "digest": "sha384", 01:23:36.469 "state": "completed" 01:23:36.469 }, 01:23:36.469 "cntlid": 87, 01:23:36.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:36.469 "listen_address": { 01:23:36.469 "adrfam": "IPv4", 01:23:36.469 "traddr": "10.0.0.3", 01:23:36.469 "trsvcid": "4420", 01:23:36.469 "trtype": "TCP" 01:23:36.469 }, 01:23:36.469 "peer_address": { 01:23:36.469 "adrfam": "IPv4", 01:23:36.469 "traddr": "10.0.0.1", 01:23:36.469 "trsvcid": "37894", 01:23:36.469 "trtype": "TCP" 01:23:36.469 }, 01:23:36.469 "qid": 0, 01:23:36.469 "state": "enabled", 01:23:36.469 "thread": "nvmf_tgt_poll_group_000" 01:23:36.469 } 01:23:36.469 ]' 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:23:36.469 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:36.726 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:36.726 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:36.726 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:36.726 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:36.726 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:37.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
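The same credentials are also exercised through the kernel initiator (the target/auth.sh@36 and @82 entries above); a minimal sketch, where $key and $ckey stand for the DHHC-1 secrets generated for the round (placeholders here, the real values appear in the log) and --dhchap-ctrl-secret is only passed for rounds that have a controller key:

# connect with nvme-cli, presenting the DH-HMAC-CHAP secrets
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
    --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# once the controller is up, drop it again and deauthorize the host
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784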
01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:37.308 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:37.565 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 01:23:37.565 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:37.566 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:38.130 01:23:38.130 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:38.130 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:38.130 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:38.388 05:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:38.388 { 01:23:38.388 "auth": { 01:23:38.388 "dhgroup": "ffdhe8192", 01:23:38.388 "digest": "sha384", 01:23:38.388 "state": "completed" 01:23:38.388 }, 01:23:38.388 "cntlid": 89, 01:23:38.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:38.388 "listen_address": { 01:23:38.388 "adrfam": "IPv4", 01:23:38.388 "traddr": "10.0.0.3", 01:23:38.388 "trsvcid": "4420", 01:23:38.388 "trtype": "TCP" 01:23:38.388 }, 01:23:38.388 "peer_address": { 01:23:38.388 "adrfam": "IPv4", 01:23:38.388 "traddr": "10.0.0.1", 01:23:38.388 "trsvcid": "45638", 01:23:38.388 "trtype": "TCP" 01:23:38.388 }, 01:23:38.388 "qid": 0, 01:23:38.388 "state": "enabled", 01:23:38.388 "thread": "nvmf_tgt_poll_group_000" 01:23:38.388 } 01:23:38.388 ]' 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:38.388 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:38.646 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:38.646 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:39.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:39.210 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:39.468 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:40.031 01:23:40.031 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:40.031 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:40.031 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:40.287 { 01:23:40.287 "auth": { 01:23:40.287 "dhgroup": "ffdhe8192", 01:23:40.287 "digest": "sha384", 01:23:40.287 "state": "completed" 01:23:40.287 }, 01:23:40.287 "cntlid": 91, 01:23:40.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:40.287 "listen_address": { 01:23:40.287 "adrfam": "IPv4", 01:23:40.287 "traddr": "10.0.0.3", 01:23:40.287 "trsvcid": "4420", 01:23:40.287 "trtype": "TCP" 01:23:40.287 }, 01:23:40.287 "peer_address": { 01:23:40.287 "adrfam": "IPv4", 01:23:40.287 "traddr": "10.0.0.1", 01:23:40.287 "trsvcid": "45658", 01:23:40.287 "trtype": "TCP" 01:23:40.287 }, 01:23:40.287 "qid": 0, 01:23:40.287 "state": "enabled", 01:23:40.287 "thread": "nvmf_tgt_poll_group_000" 01:23:40.287 } 01:23:40.287 ]' 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:40.287 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:40.544 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:40.544 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:41.110 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:41.110 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:41.367 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:41.933 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:41.933 { 01:23:41.933 "auth": { 01:23:41.933 "dhgroup": "ffdhe8192", 01:23:41.933 "digest": "sha384", 01:23:41.933 "state": "completed" 01:23:41.933 }, 01:23:41.933 "cntlid": 93, 01:23:41.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:41.933 "listen_address": { 01:23:41.933 "adrfam": "IPv4", 01:23:41.933 "traddr": "10.0.0.3", 01:23:41.933 "trsvcid": "4420", 01:23:41.933 "trtype": "TCP" 01:23:41.933 }, 01:23:41.933 "peer_address": { 01:23:41.933 "adrfam": "IPv4", 01:23:41.933 "traddr": "10.0.0.1", 01:23:41.933 "trsvcid": "45684", 01:23:41.933 "trtype": "TCP" 01:23:41.933 }, 01:23:41.933 "qid": 0, 01:23:41.933 "state": "enabled", 01:23:41.933 "thread": "nvmf_tgt_poll_group_000" 01:23:41.933 } 01:23:41.933 ]' 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:41.933 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:42.191 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:42.191 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:42.191 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:42.191 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:42.191 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:42.448 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:42.448 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:43.013 05:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:43.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:43.013 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:43.577 01:23:43.577 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:43.577 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:43.577 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:43.834 { 01:23:43.834 "auth": { 01:23:43.834 "dhgroup": "ffdhe8192", 01:23:43.834 "digest": "sha384", 01:23:43.834 "state": "completed" 01:23:43.834 }, 01:23:43.834 "cntlid": 95, 01:23:43.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:43.834 "listen_address": { 01:23:43.834 "adrfam": "IPv4", 01:23:43.834 "traddr": "10.0.0.3", 01:23:43.834 "trsvcid": "4420", 01:23:43.834 "trtype": "TCP" 01:23:43.834 }, 01:23:43.834 "peer_address": { 01:23:43.834 "adrfam": "IPv4", 01:23:43.834 "traddr": "10.0.0.1", 01:23:43.834 "trsvcid": "45722", 01:23:43.834 "trtype": "TCP" 01:23:43.834 }, 01:23:43.834 "qid": 0, 01:23:43.834 "state": "enabled", 01:23:43.834 "thread": "nvmf_tgt_poll_group_000" 01:23:43.834 } 01:23:43.834 ]' 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:43.834 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:44.091 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:44.091 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
01:23:44.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:44.655 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:44.912 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:44.913 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 01:23:45.170 01:23:45.170 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:45.170 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:45.170 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:45.429 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:45.430 { 01:23:45.430 "auth": { 01:23:45.430 "dhgroup": "null", 01:23:45.430 "digest": "sha512", 01:23:45.430 "state": "completed" 01:23:45.430 }, 01:23:45.430 "cntlid": 97, 01:23:45.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:45.430 "listen_address": { 01:23:45.430 "adrfam": "IPv4", 01:23:45.430 "traddr": "10.0.0.3", 01:23:45.430 "trsvcid": "4420", 01:23:45.430 "trtype": "TCP" 01:23:45.430 }, 01:23:45.430 "peer_address": { 01:23:45.430 "adrfam": "IPv4", 01:23:45.430 "traddr": "10.0.0.1", 01:23:45.430 "trsvcid": "45744", 01:23:45.430 "trtype": "TCP" 01:23:45.430 }, 01:23:45.430 "qid": 0, 01:23:45.430 "state": "enabled", 01:23:45.430 "thread": "nvmf_tgt_poll_group_000" 01:23:45.430 } 01:23:45.430 ]' 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:45.430 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:45.430 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:45.430 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:45.430 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:45.687 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:45.687 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 
717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:46.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:46.256 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:46.514 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:46.771 01:23:46.771 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:46.771 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:46.771 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:47.028 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:47.028 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:47.029 { 01:23:47.029 "auth": { 01:23:47.029 "dhgroup": "null", 01:23:47.029 "digest": "sha512", 01:23:47.029 "state": "completed" 01:23:47.029 }, 01:23:47.029 "cntlid": 99, 01:23:47.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:47.029 "listen_address": { 01:23:47.029 "adrfam": "IPv4", 01:23:47.029 "traddr": "10.0.0.3", 01:23:47.029 "trsvcid": "4420", 01:23:47.029 "trtype": "TCP" 01:23:47.029 }, 01:23:47.029 "peer_address": { 01:23:47.029 "adrfam": "IPv4", 01:23:47.029 "traddr": "10.0.0.1", 01:23:47.029 "trsvcid": "45760", 01:23:47.029 "trtype": "TCP" 01:23:47.029 }, 01:23:47.029 "qid": 0, 01:23:47.029 "state": "enabled", 01:23:47.029 "thread": "nvmf_tgt_poll_group_000" 01:23:47.029 } 01:23:47.029 ]' 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:47.029 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:47.286 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:47.287 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:48.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:48.220 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:48.221 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:48.221 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:48.221 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:48.478 01:23:48.478 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:48.478 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:48.478 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:48.736 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:48.736 { 01:23:48.736 "auth": { 01:23:48.736 "dhgroup": "null", 01:23:48.736 "digest": "sha512", 01:23:48.736 "state": "completed" 01:23:48.736 }, 01:23:48.736 "cntlid": 101, 01:23:48.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:48.736 "listen_address": { 01:23:48.736 "adrfam": "IPv4", 01:23:48.736 "traddr": "10.0.0.3", 01:23:48.736 "trsvcid": "4420", 01:23:48.736 "trtype": "TCP" 01:23:48.736 }, 01:23:48.736 "peer_address": { 01:23:48.736 "adrfam": "IPv4", 01:23:48.736 "traddr": "10.0.0.1", 01:23:48.736 "trsvcid": "55206", 01:23:48.736 "trtype": "TCP" 01:23:48.736 }, 01:23:48.737 "qid": 0, 01:23:48.737 "state": "enabled", 01:23:48.737 "thread": "nvmf_tgt_poll_group_000" 01:23:48.737 } 01:23:48.737 ]' 01:23:48.737 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:48.737 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:48.737 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:48.737 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:48.737 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:48.994 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:48.994 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:48.994 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:48.994 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:48.994 05:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:49.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:49.559 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:49.817 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:50.075 01:23:50.075 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:50.075 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:50.075 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:50.333 { 01:23:50.333 "auth": { 01:23:50.333 "dhgroup": "null", 01:23:50.333 "digest": "sha512", 01:23:50.333 "state": "completed" 01:23:50.333 }, 01:23:50.333 "cntlid": 103, 01:23:50.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:50.333 "listen_address": { 01:23:50.333 "adrfam": "IPv4", 01:23:50.333 "traddr": "10.0.0.3", 01:23:50.333 "trsvcid": "4420", 01:23:50.333 "trtype": "TCP" 01:23:50.333 }, 01:23:50.333 "peer_address": { 01:23:50.333 "adrfam": "IPv4", 01:23:50.333 "traddr": "10.0.0.1", 01:23:50.333 "trsvcid": "55230", 01:23:50.333 "trtype": "TCP" 01:23:50.333 }, 01:23:50.333 "qid": 0, 01:23:50.333 "state": "enabled", 01:23:50.333 "thread": "nvmf_tgt_poll_group_000" 01:23:50.333 } 01:23:50.333 ]' 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:23:50.333 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:50.591 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:50.591 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:50.591 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:50.591 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:50.591 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp 
-a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:51.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:51.208 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:51.466 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:51.466 05:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:51.724 01:23:51.724 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:51.724 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:51.724 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:51.982 { 01:23:51.982 "auth": { 01:23:51.982 "dhgroup": "ffdhe2048", 01:23:51.982 "digest": "sha512", 01:23:51.982 "state": "completed" 01:23:51.982 }, 01:23:51.982 "cntlid": 105, 01:23:51.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:51.982 "listen_address": { 01:23:51.982 "adrfam": "IPv4", 01:23:51.982 "traddr": "10.0.0.3", 01:23:51.982 "trsvcid": "4420", 01:23:51.982 "trtype": "TCP" 01:23:51.982 }, 01:23:51.982 "peer_address": { 01:23:51.982 "adrfam": "IPv4", 01:23:51.982 "traddr": "10.0.0.1", 01:23:51.982 "trsvcid": "55264", 01:23:51.982 "trtype": "TCP" 01:23:51.982 }, 01:23:51.982 "qid": 0, 01:23:51.982 "state": "enabled", 01:23:51.982 "thread": "nvmf_tgt_poll_group_000" 01:23:51.982 } 01:23:51.982 ]' 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:51.982 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:52.240 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: 
--dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:52.240 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:52.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:52.804 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:52.805 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:52.805 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:52.805 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:52.805 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:53.061 01:23:53.061 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:53.061 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:53.061 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:53.318 { 01:23:53.318 "auth": { 01:23:53.318 "dhgroup": "ffdhe2048", 01:23:53.318 "digest": "sha512", 01:23:53.318 "state": "completed" 01:23:53.318 }, 01:23:53.318 "cntlid": 107, 01:23:53.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:53.318 "listen_address": { 01:23:53.318 "adrfam": "IPv4", 01:23:53.318 "traddr": "10.0.0.3", 01:23:53.318 "trsvcid": "4420", 01:23:53.318 "trtype": "TCP" 01:23:53.318 }, 01:23:53.318 "peer_address": { 01:23:53.318 "adrfam": "IPv4", 01:23:53.318 "traddr": "10.0.0.1", 01:23:53.318 "trsvcid": "55286", 01:23:53.318 "trtype": "TCP" 01:23:53.318 }, 01:23:53.318 "qid": 0, 01:23:53.318 "state": "enabled", 01:23:53.318 "thread": "nvmf_tgt_poll_group_000" 01:23:53.318 } 01:23:53.318 ]' 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:53.318 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:53.576 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:53.576 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:53.576 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:53.576 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:53.576 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:53.576 
05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:53.576 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:54.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:54.141 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:54.399 05:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:54.399 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:23:54.656 01:23:54.656 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:54.656 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:54.656 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:54.914 { 01:23:54.914 "auth": { 01:23:54.914 "dhgroup": "ffdhe2048", 01:23:54.914 "digest": "sha512", 01:23:54.914 "state": "completed" 01:23:54.914 }, 01:23:54.914 "cntlid": 109, 01:23:54.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:54.914 "listen_address": { 01:23:54.914 "adrfam": "IPv4", 01:23:54.914 "traddr": "10.0.0.3", 01:23:54.914 "trsvcid": "4420", 01:23:54.914 "trtype": "TCP" 01:23:54.914 }, 01:23:54.914 "peer_address": { 01:23:54.914 "adrfam": "IPv4", 01:23:54.914 "traddr": "10.0.0.1", 01:23:54.914 "trsvcid": "55298", 01:23:54.914 "trtype": "TCP" 01:23:54.914 }, 01:23:54.914 "qid": 0, 01:23:54.914 "state": "enabled", 01:23:54.914 "thread": "nvmf_tgt_poll_group_000" 01:23:54.914 } 01:23:54.914 ]' 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:54.914 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:55.172 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:55.172 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:55.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:55.738 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:56.011 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:56.012 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:23:56.269 01:23:56.269 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:56.269 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:56.269 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:56.527 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:56.527 { 01:23:56.527 "auth": { 01:23:56.527 "dhgroup": "ffdhe2048", 01:23:56.527 "digest": "sha512", 01:23:56.527 "state": "completed" 01:23:56.527 }, 01:23:56.527 "cntlid": 111, 01:23:56.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:56.527 "listen_address": { 01:23:56.527 "adrfam": "IPv4", 01:23:56.527 "traddr": "10.0.0.3", 01:23:56.527 "trsvcid": "4420", 01:23:56.527 "trtype": "TCP" 01:23:56.527 }, 01:23:56.527 "peer_address": { 01:23:56.527 "adrfam": "IPv4", 01:23:56.527 "traddr": "10.0.0.1", 01:23:56.527 "trsvcid": "55314", 01:23:56.527 "trtype": "TCP" 01:23:56.527 }, 01:23:56.527 "qid": 0, 01:23:56.527 "state": "enabled", 01:23:56.527 "thread": "nvmf_tgt_poll_group_000" 01:23:56.527 } 01:23:56.527 ]' 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:56.527 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:56.784 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:56.784 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:57.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:23:57.348 05:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:57.605 05:18:49 
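At this point the trace has just moved from the ffdhe2048 group to ffdhe3072, which matches the nested loop visible in the target/auth.sh xtrace markers (@119-@123): every DH group is exercised with every key index under the sha512 digest. A minimal sketch of that loop, assuming the dhgroups/keys arrays hold the values seen in this part of the log (ffdhe2048 through ffdhe6144, key0 through key3); hostrpc and connect_authenticate are the script's own helpers:

    # Sketch of the iteration driving this stretch of the trace (array contents assumed from the log).
    for dhgroup in "${dhgroups[@]}"; do            # target/auth.sh@119
      for keyid in "${!keys[@]}"; do               # target/auth.sh@120
        # restrict host-side negotiation to a single digest/dhgroup combination per pass
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate sha512 "$dhgroup" "$keyid"                                      # @123
      done
    done

Pinning one digest/dhgroup per pass is what makes the later jq assertions on the negotiated auth parameters meaningful.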
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:57.605 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:23:57.863 01:23:57.863 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:57.863 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:57.863 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:58.429 { 01:23:58.429 "auth": { 01:23:58.429 "dhgroup": "ffdhe3072", 01:23:58.429 "digest": "sha512", 01:23:58.429 "state": "completed" 01:23:58.429 }, 01:23:58.429 "cntlid": 113, 01:23:58.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:58.429 "listen_address": { 01:23:58.429 "adrfam": "IPv4", 01:23:58.429 "traddr": "10.0.0.3", 01:23:58.429 "trsvcid": "4420", 01:23:58.429 "trtype": "TCP" 01:23:58.429 }, 01:23:58.429 "peer_address": { 01:23:58.429 "adrfam": "IPv4", 01:23:58.429 "traddr": "10.0.0.1", 01:23:58.429 "trsvcid": "44552", 01:23:58.429 "trtype": "TCP" 01:23:58.429 }, 01:23:58.429 "qid": 0, 01:23:58.429 "state": "enabled", 01:23:58.429 "thread": "nvmf_tgt_poll_group_000" 01:23:58.429 } 01:23:58.429 ]' 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:58.429 05:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:23:58.687 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:58.687 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:23:59.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:59.252 05:18:50 
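Each pass, like the ffdhe3072/key0 one just completed, runs the same RPC sequence; a condensed sketch with values copied from the trace (the rpc/hostnqn/subnqn shorthands are introduced here for readability, and key0/ckey0 are key names the test registered earlier; in the script, rpc_cmd goes to the target's RPC socket and hostrpc to /var/tmp/host.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784
    subnqn=nqn.2024-03.io.spdk:cnode0

    # allow the host NQN to authenticate against the subsystem with key0/ckey0 (target side)
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach a controller from the SPDK host stack over TCP, authenticating with the same keys
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the controller exists and the qpair negotiated the expected digest/dhgroup
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers     # expect a controller named nvme0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"                 # auth block should report "completed"
    # tear down before the next key/dhgroup combination
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"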
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:59.252 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:23:59.817 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:59.817 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:23:59.817 { 01:23:59.817 "auth": { 01:23:59.817 "dhgroup": "ffdhe3072", 01:23:59.817 "digest": "sha512", 01:23:59.817 "state": "completed" 01:23:59.817 }, 01:23:59.817 "cntlid": 115, 01:23:59.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:23:59.817 "listen_address": { 01:23:59.817 "adrfam": "IPv4", 01:23:59.817 "traddr": "10.0.0.3", 01:23:59.817 "trsvcid": "4420", 01:23:59.817 "trtype": "TCP" 01:23:59.817 }, 01:23:59.817 "peer_address": { 01:23:59.818 "adrfam": "IPv4", 01:23:59.818 "traddr": "10.0.0.1", 01:23:59.818 "trsvcid": "44578", 01:23:59.818 "trtype": "TCP" 01:23:59.818 }, 01:23:59.818 "qid": 0, 01:23:59.818 "state": "enabled", 01:23:59.818 "thread": "nvmf_tgt_poll_group_000" 01:23:59.818 } 01:23:59.818 ]' 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:23:59.818 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:00.075 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:00.075 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:00.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:24:00.639 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:00.896 05:18:52 
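Besides the SPDK host stack, every pass also connects the kernel initiator through nvme-cli with the same secrets (the nvme_connect/@36 lines above). The in-band secrets use the DHHC-1 key string format, whose second field appears to record how the secret was transformed (00 for an unhashed secret, 03 for SHA-512). A sketch of that leg with the address, NQNs and flags copied from the trace; the secrets are shortened here, the full values are printed above:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 \
        --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 \
        --dhchap-secret 'DHHC-1:00:YzIxMGRiNDFl...1F+s5w==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:M2Y3ODIxNTcz...MNZqFog=:'
    # expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0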
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:00.896 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:01.154 01:24:01.154 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:01.154 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:01.154 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:01.412 { 01:24:01.412 "auth": { 01:24:01.412 "dhgroup": "ffdhe3072", 01:24:01.412 "digest": "sha512", 01:24:01.412 "state": "completed" 01:24:01.412 }, 01:24:01.412 "cntlid": 117, 01:24:01.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:01.412 "listen_address": { 01:24:01.412 "adrfam": "IPv4", 01:24:01.412 "traddr": "10.0.0.3", 01:24:01.412 "trsvcid": "4420", 01:24:01.412 "trtype": "TCP" 01:24:01.412 }, 01:24:01.412 "peer_address": { 01:24:01.412 "adrfam": "IPv4", 01:24:01.412 "traddr": "10.0.0.1", 01:24:01.412 "trsvcid": "44600", 01:24:01.412 "trtype": "TCP" 01:24:01.412 }, 01:24:01.412 "qid": 0, 01:24:01.412 "state": "enabled", 01:24:01.412 "thread": "nvmf_tgt_poll_group_000" 01:24:01.412 } 01:24:01.412 ]' 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 
== \f\f\d\h\e\3\0\7\2 ]] 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:01.412 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:01.672 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:01.672 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:02.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:24:02.311 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:02.312 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:02.570 01:24:02.570 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:02.570 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:02.570 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:02.830 { 01:24:02.830 "auth": { 01:24:02.830 "dhgroup": "ffdhe3072", 01:24:02.830 "digest": "sha512", 01:24:02.830 "state": "completed" 01:24:02.830 }, 01:24:02.830 "cntlid": 119, 01:24:02.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:02.830 "listen_address": { 01:24:02.830 "adrfam": "IPv4", 01:24:02.830 "traddr": "10.0.0.3", 01:24:02.830 "trsvcid": "4420", 01:24:02.830 "trtype": "TCP" 01:24:02.830 }, 01:24:02.830 "peer_address": { 01:24:02.830 "adrfam": "IPv4", 01:24:02.830 "traddr": "10.0.0.1", 01:24:02.830 "trsvcid": "44622", 01:24:02.830 "trtype": "TCP" 01:24:02.830 }, 01:24:02.830 "qid": 0, 01:24:02.830 "state": "enabled", 01:24:02.830 "thread": "nvmf_tgt_poll_group_000" 01:24:02.830 } 01:24:02.830 ]' 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:02.830 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:03.090 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:03.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:03.690 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:03.948 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:04.205 01:24:04.205 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:04.205 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:04.205 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:04.463 { 01:24:04.463 "auth": { 01:24:04.463 "dhgroup": "ffdhe4096", 01:24:04.463 "digest": "sha512", 01:24:04.463 "state": "completed" 01:24:04.463 }, 01:24:04.463 "cntlid": 121, 01:24:04.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:04.463 "listen_address": { 01:24:04.463 "adrfam": "IPv4", 01:24:04.463 "traddr": "10.0.0.3", 01:24:04.463 "trsvcid": "4420", 01:24:04.463 "trtype": "TCP" 01:24:04.463 }, 01:24:04.463 "peer_address": { 01:24:04.463 "adrfam": "IPv4", 01:24:04.463 "traddr": "10.0.0.1", 01:24:04.463 "trsvcid": "44640", 01:24:04.463 "trtype": "TCP" 01:24:04.463 }, 01:24:04.463 "qid": 0, 01:24:04.463 "state": "enabled", 01:24:04.463 "thread": "nvmf_tgt_poll_group_000" 01:24:04.463 } 01:24:04.463 ]' 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:04.463 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:04.463 05:18:56 
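The qpair JSON printed after each attach is what the digest/dhgroup/state assertions run against. An equivalent standalone form of those three checks, for the ffdhe4096/sha512 pass shown here (rpc path and subsystem NQN taken from the trace):

    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]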
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:04.720 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:05.285 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:05.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:05.285 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:05.285 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.285 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:05.542 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.542 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:05.542 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:05.542 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:24:05.542 05:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:05.542 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:05.799 01:24:05.799 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:05.799 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:05.799 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:06.057 { 01:24:06.057 "auth": { 01:24:06.057 "dhgroup": "ffdhe4096", 01:24:06.057 "digest": "sha512", 01:24:06.057 "state": "completed" 01:24:06.057 }, 01:24:06.057 "cntlid": 123, 01:24:06.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:06.057 "listen_address": { 01:24:06.057 "adrfam": "IPv4", 01:24:06.057 "traddr": "10.0.0.3", 01:24:06.057 "trsvcid": "4420", 01:24:06.057 "trtype": "TCP" 01:24:06.057 }, 01:24:06.057 "peer_address": { 01:24:06.057 "adrfam": "IPv4", 01:24:06.057 "traddr": "10.0.0.1", 01:24:06.057 "trsvcid": "44668", 01:24:06.057 "trtype": "TCP" 01:24:06.057 }, 01:24:06.057 "qid": 0, 01:24:06.057 "state": "enabled", 01:24:06.057 "thread": "nvmf_tgt_poll_group_000" 01:24:06.057 } 01:24:06.057 ]' 01:24:06.057 05:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:06.057 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:06.314 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:24:06.314 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:06.314 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:06.314 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:06.314 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:06.571 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:06.571 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:07.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe4096 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:07.137 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:07.395 01:24:07.395 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:07.395 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:07.395 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:07.652 { 01:24:07.652 "auth": { 01:24:07.652 "dhgroup": "ffdhe4096", 01:24:07.652 "digest": "sha512", 01:24:07.652 "state": "completed" 01:24:07.652 }, 01:24:07.652 "cntlid": 125, 01:24:07.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:07.652 "listen_address": { 01:24:07.652 "adrfam": "IPv4", 01:24:07.652 "traddr": "10.0.0.3", 01:24:07.652 "trsvcid": "4420", 01:24:07.652 "trtype": "TCP" 01:24:07.652 }, 01:24:07.652 "peer_address": { 01:24:07.652 "adrfam": "IPv4", 01:24:07.652 "traddr": "10.0.0.1", 01:24:07.652 "trsvcid": "60208", 01:24:07.652 "trtype": "TCP" 01:24:07.652 }, 01:24:07.652 "qid": 0, 01:24:07.652 "state": 
"enabled", 01:24:07.652 "thread": "nvmf_tgt_poll_group_000" 01:24:07.652 } 01:24:07.652 ]' 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:07.652 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:07.910 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:24:07.910 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:07.910 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:07.910 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:07.910 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:08.167 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:08.168 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:08.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:08.733 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:08.990 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:09.248 { 01:24:09.248 "auth": { 01:24:09.248 "dhgroup": "ffdhe4096", 01:24:09.248 "digest": "sha512", 01:24:09.248 "state": "completed" 01:24:09.248 }, 01:24:09.248 "cntlid": 127, 01:24:09.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:09.248 "listen_address": { 01:24:09.248 "adrfam": "IPv4", 01:24:09.248 "traddr": "10.0.0.3", 01:24:09.248 "trsvcid": "4420", 01:24:09.248 "trtype": "TCP" 01:24:09.248 }, 01:24:09.248 "peer_address": { 01:24:09.248 "adrfam": "IPv4", 01:24:09.248 "traddr": "10.0.0.1", 01:24:09.248 "trsvcid": "60232", 01:24:09.248 "trtype": "TCP" 01:24:09.248 }, 01:24:09.248 "qid": 0, 01:24:09.248 
"state": "enabled", 01:24:09.248 "thread": "nvmf_tgt_poll_group_000" 01:24:09.248 } 01:24:09.248 ]' 01:24:09.248 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:09.506 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:09.763 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:09.763 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:10.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:10.328 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:10.329 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:10.893 01:24:10.893 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:10.893 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:10.893 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:11.151 { 01:24:11.151 "auth": { 01:24:11.151 "dhgroup": "ffdhe6144", 01:24:11.151 "digest": "sha512", 01:24:11.151 "state": "completed" 01:24:11.151 }, 01:24:11.151 "cntlid": 129, 01:24:11.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:11.151 "listen_address": { 01:24:11.151 "adrfam": "IPv4", 01:24:11.151 "traddr": "10.0.0.3", 01:24:11.151 "trsvcid": "4420", 01:24:11.151 "trtype": "TCP" 01:24:11.151 }, 01:24:11.151 "peer_address": { 01:24:11.151 "adrfam": "IPv4", 01:24:11.151 "traddr": "10.0.0.1", 
01:24:11.151 "trsvcid": "60264", 01:24:11.151 "trtype": "TCP" 01:24:11.151 }, 01:24:11.151 "qid": 0, 01:24:11.151 "state": "enabled", 01:24:11.151 "thread": "nvmf_tgt_poll_group_000" 01:24:11.151 } 01:24:11.151 ]' 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:11.151 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:11.409 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:11.409 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:11.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:11.974 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 
01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:12.231 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:12.232 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:12.488 01:24:12.488 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:12.488 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:12.488 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:12.744 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:12.744 { 01:24:12.744 "auth": { 01:24:12.744 "dhgroup": "ffdhe6144", 01:24:12.744 "digest": "sha512", 01:24:12.744 "state": "completed" 01:24:12.744 }, 01:24:12.744 "cntlid": 131, 01:24:12.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:12.744 "listen_address": { 01:24:12.744 "adrfam": "IPv4", 01:24:12.744 
"traddr": "10.0.0.3", 01:24:12.744 "trsvcid": "4420", 01:24:12.744 "trtype": "TCP" 01:24:12.744 }, 01:24:12.744 "peer_address": { 01:24:12.744 "adrfam": "IPv4", 01:24:12.744 "traddr": "10.0.0.1", 01:24:12.744 "trsvcid": "60280", 01:24:12.744 "trtype": "TCP" 01:24:12.744 }, 01:24:12.744 "qid": 0, 01:24:12.744 "state": "enabled", 01:24:12.744 "thread": "nvmf_tgt_poll_group_000" 01:24:12.744 } 01:24:12.744 ]' 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:12.745 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:13.001 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:13.001 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:13.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:13.565 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:13.823 05:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:13.823 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:14.083 01:24:14.083 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:14.083 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:14.083 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:14.368 { 01:24:14.368 "auth": { 01:24:14.368 "dhgroup": "ffdhe6144", 01:24:14.368 "digest": "sha512", 01:24:14.368 "state": "completed" 01:24:14.368 }, 01:24:14.368 "cntlid": 133, 01:24:14.368 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:14.368 "listen_address": { 01:24:14.368 "adrfam": "IPv4", 01:24:14.368 "traddr": "10.0.0.3", 01:24:14.368 "trsvcid": "4420", 01:24:14.368 "trtype": "TCP" 01:24:14.368 }, 01:24:14.368 "peer_address": { 01:24:14.368 "adrfam": "IPv4", 01:24:14.368 "traddr": "10.0.0.1", 01:24:14.368 "trsvcid": "60310", 01:24:14.368 "trtype": "TCP" 01:24:14.368 }, 01:24:14.368 "qid": 0, 01:24:14.368 "state": "enabled", 01:24:14.368 "thread": "nvmf_tgt_poll_group_000" 01:24:14.368 } 01:24:14.368 ]' 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:14.368 05:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:14.630 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:14.630 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:15.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:15.194 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:15.452 05:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:16.017 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:16.017 { 01:24:16.017 "auth": { 01:24:16.017 "dhgroup": "ffdhe6144", 01:24:16.017 "digest": "sha512", 01:24:16.017 "state": "completed" 01:24:16.017 }, 01:24:16.017 "cntlid": 135, 
01:24:16.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:16.017 "listen_address": { 01:24:16.017 "adrfam": "IPv4", 01:24:16.017 "traddr": "10.0.0.3", 01:24:16.017 "trsvcid": "4420", 01:24:16.017 "trtype": "TCP" 01:24:16.017 }, 01:24:16.017 "peer_address": { 01:24:16.017 "adrfam": "IPv4", 01:24:16.017 "traddr": "10.0.0.1", 01:24:16.017 "trsvcid": "60354", 01:24:16.017 "trtype": "TCP" 01:24:16.017 }, 01:24:16.017 "qid": 0, 01:24:16.017 "state": "enabled", 01:24:16.017 "thread": "nvmf_tgt_poll_group_000" 01:24:16.017 } 01:24:16.017 ]' 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:16.017 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:16.275 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:16.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:16.841 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:17.099 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 01:24:17.099 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:17.100 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:17.665 01:24:17.665 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:17.665 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:17.665 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:17.923 { 01:24:17.923 "auth": { 01:24:17.923 "dhgroup": "ffdhe8192", 
01:24:17.923 "digest": "sha512", 01:24:17.923 "state": "completed" 01:24:17.923 }, 01:24:17.923 "cntlid": 137, 01:24:17.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:17.923 "listen_address": { 01:24:17.923 "adrfam": "IPv4", 01:24:17.923 "traddr": "10.0.0.3", 01:24:17.923 "trsvcid": "4420", 01:24:17.923 "trtype": "TCP" 01:24:17.923 }, 01:24:17.923 "peer_address": { 01:24:17.923 "adrfam": "IPv4", 01:24:17.923 "traddr": "10.0.0.1", 01:24:17.923 "trsvcid": "50608", 01:24:17.923 "trtype": "TCP" 01:24:17.923 }, 01:24:17.923 "qid": 0, 01:24:17.923 "state": "enabled", 01:24:17.923 "thread": "nvmf_tgt_poll_group_000" 01:24:17.923 } 01:24:17.923 ]' 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:17.923 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:18.181 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:18.181 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:18.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:18.747 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:19.004 05:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:19.571 01:24:19.571 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:19.571 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:19.571 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:19.829 { 01:24:19.829 "auth": { 01:24:19.829 "dhgroup": "ffdhe8192", 01:24:19.829 "digest": "sha512", 01:24:19.829 "state": "completed" 01:24:19.829 }, 01:24:19.829 "cntlid": 139, 01:24:19.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:19.829 "listen_address": { 01:24:19.829 "adrfam": "IPv4", 01:24:19.829 "traddr": "10.0.0.3", 01:24:19.829 "trsvcid": "4420", 01:24:19.829 "trtype": "TCP" 01:24:19.829 }, 01:24:19.829 "peer_address": { 01:24:19.829 "adrfam": "IPv4", 01:24:19.829 "traddr": "10.0.0.1", 01:24:19.829 "trsvcid": "50644", 01:24:19.829 "trtype": "TCP" 01:24:19.829 }, 01:24:19.829 "qid": 0, 01:24:19.829 "state": "enabled", 01:24:19.829 "thread": "nvmf_tgt_poll_group_000" 01:24:19.829 } 01:24:19.829 ]' 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:19.829 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:20.087 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:20.087 05:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: --dhchap-ctrl-secret DHHC-1:02:NTA4N2QzMjFmYWIxODc1NGM3YzQwOWI5OWQ3OWUyNzVlZDczNDYwNzY3ZDUwZTkyLVakcQ==: 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:20.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:20.652 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:20.911 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:24:21.477 01:24:21.477 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:21.477 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:21.477 05:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:21.735 { 01:24:21.735 "auth": { 01:24:21.735 "dhgroup": "ffdhe8192", 01:24:21.735 "digest": "sha512", 01:24:21.735 "state": "completed" 01:24:21.735 }, 01:24:21.735 "cntlid": 141, 01:24:21.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:21.735 "listen_address": { 01:24:21.735 "adrfam": "IPv4", 01:24:21.735 "traddr": "10.0.0.3", 01:24:21.735 "trsvcid": "4420", 01:24:21.735 "trtype": "TCP" 01:24:21.735 }, 01:24:21.735 "peer_address": { 01:24:21.735 "adrfam": "IPv4", 01:24:21.735 "traddr": "10.0.0.1", 01:24:21.735 "trsvcid": "50672", 01:24:21.735 "trtype": "TCP" 01:24:21.735 }, 01:24:21.735 "qid": 0, 01:24:21.735 "state": "enabled", 01:24:21.735 "thread": "nvmf_tgt_poll_group_000" 01:24:21.735 } 01:24:21.735 ]' 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:21.735 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:21.992 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:21.992 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:01:YjU3ZWFhNDkyMmM1YTk1MjUzMWFjZDgyMTgzNGE1MmXjRIAD: 01:24:22.558 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:22.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:22.558 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:22.558 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.558 05:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:22.558 05:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.558 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:24:22.558 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:22.558 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:24:22.814 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:22.815 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:23.378 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:23.378 05:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:23.378 { 01:24:23.378 "auth": { 01:24:23.378 "dhgroup": "ffdhe8192", 01:24:23.378 "digest": "sha512", 01:24:23.378 "state": "completed" 01:24:23.378 }, 01:24:23.378 "cntlid": 143, 01:24:23.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:23.378 "listen_address": { 01:24:23.378 "adrfam": "IPv4", 01:24:23.378 "traddr": "10.0.0.3", 01:24:23.378 "trsvcid": "4420", 01:24:23.378 "trtype": "TCP" 01:24:23.378 }, 01:24:23.378 "peer_address": { 01:24:23.378 "adrfam": "IPv4", 01:24:23.378 "traddr": "10.0.0.1", 01:24:23.378 "trsvcid": "50710", 01:24:23.378 "trtype": "TCP" 01:24:23.378 }, 01:24:23.378 "qid": 0, 01:24:23.378 "state": "enabled", 01:24:23.378 "thread": "nvmf_tgt_poll_group_000" 01:24:23.378 } 01:24:23.378 ]' 01:24:23.378 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:23.636 05:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:23.636 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:23.892 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:23.892 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:24.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.456 
05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:24.456 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:24.456 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:24:25.021 01:24:25.021 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:25.021 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:25.021 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:25.279 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:25.279 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:25.279 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:25.279 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:25.279 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:25.280 { 01:24:25.280 "auth": { 01:24:25.280 "dhgroup": "ffdhe8192", 01:24:25.280 "digest": "sha512", 01:24:25.280 "state": "completed" 01:24:25.280 }, 01:24:25.280 "cntlid": 145, 01:24:25.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:25.280 "listen_address": { 01:24:25.280 "adrfam": "IPv4", 01:24:25.280 "traddr": "10.0.0.3", 01:24:25.280 "trsvcid": "4420", 01:24:25.280 "trtype": "TCP" 01:24:25.280 }, 01:24:25.280 "peer_address": { 01:24:25.280 "adrfam": "IPv4", 01:24:25.280 "traddr": "10.0.0.1", 01:24:25.280 "trsvcid": "50748", 01:24:25.280 "trtype": "TCP" 01:24:25.280 }, 01:24:25.280 "qid": 0, 01:24:25.280 "state": "enabled", 01:24:25.280 "thread": "nvmf_tgt_poll_group_000" 01:24:25.280 } 01:24:25.280 ]' 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:25.280 05:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:25.557 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:25.557 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:00:YzIxMGRiNDFlYjk0YTM1YzNhOTdjNTU1YzgzMTM0Y2UxZjVmNDFkZDFiMmQ0NjUz1F+s5w==: --dhchap-ctrl-secret DHHC-1:03:M2Y3ODIxNTczMjkxMzQzZmFhYjkzMTZhODAxNjc3MDEwM2IzYjA0OWVlZmRmYjFiMDVjN2E0NjA5MmRmZjQ1MNZqFog=: 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 01:24:26.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:24:26.146 05:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:24:26.712 2024/12/09 05:19:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:26.712 request: 01:24:26.712 { 01:24:26.712 
"method": "bdev_nvme_attach_controller", 01:24:26.712 "params": { 01:24:26.712 "name": "nvme0", 01:24:26.712 "trtype": "tcp", 01:24:26.712 "traddr": "10.0.0.3", 01:24:26.712 "adrfam": "ipv4", 01:24:26.712 "trsvcid": "4420", 01:24:26.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:26.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:26.712 "prchk_reftag": false, 01:24:26.712 "prchk_guard": false, 01:24:26.712 "hdgst": false, 01:24:26.712 "ddgst": false, 01:24:26.712 "dhchap_key": "key2", 01:24:26.712 "allow_unrecognized_csi": false 01:24:26.712 } 01:24:26.712 } 01:24:26.712 Got JSON-RPC error response 01:24:26.712 GoRPCClient: error on JSON-RPC call 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:24:26.712 05:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:24:26.712 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:24:27.279 2024/12/09 05:19:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:27.279 request: 01:24:27.279 { 01:24:27.279 "method": "bdev_nvme_attach_controller", 01:24:27.279 "params": { 01:24:27.279 "name": "nvme0", 01:24:27.279 "trtype": "tcp", 01:24:27.279 "traddr": "10.0.0.3", 01:24:27.279 "adrfam": "ipv4", 01:24:27.279 "trsvcid": "4420", 01:24:27.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:27.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:27.279 "prchk_reftag": false, 01:24:27.279 "prchk_guard": false, 01:24:27.279 "hdgst": false, 01:24:27.279 "ddgst": false, 01:24:27.279 "dhchap_key": "key1", 01:24:27.279 "dhchap_ctrlr_key": "ckey2", 01:24:27.279 "allow_unrecognized_csi": false 01:24:27.279 } 01:24:27.279 } 01:24:27.279 Got JSON-RPC error response 01:24:27.279 GoRPCClient: error on JSON-RPC call 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:27.279 05:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:27.279 05:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:24:27.845 2024/12/09 05:19:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:27.845 request: 01:24:27.845 { 01:24:27.845 "method": "bdev_nvme_attach_controller", 01:24:27.845 "params": { 01:24:27.845 "name": "nvme0", 01:24:27.845 "trtype": "tcp", 01:24:27.845 "traddr": "10.0.0.3", 01:24:27.845 "adrfam": "ipv4", 01:24:27.845 "trsvcid": "4420", 01:24:27.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:27.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:27.845 "prchk_reftag": false, 01:24:27.845 "prchk_guard": false, 01:24:27.845 "hdgst": false, 01:24:27.845 "ddgst": false, 01:24:27.845 "dhchap_key": "key1", 01:24:27.845 "dhchap_ctrlr_key": "ckey1", 01:24:27.845 "allow_unrecognized_csi": false 01:24:27.845 } 01:24:27.845 } 01:24:27.845 Got JSON-RPC error response 01:24:27.845 GoRPCClient: error on JSON-RPC call 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:27.845 05:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 75963 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 75963 ']' 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 75963 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75963 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:27.845 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75963' 01:24:27.846 killing process with pid 75963 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 75963 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 75963 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80525 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80525 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80525 ']' 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:27.846 05:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 80525 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80525 ']' 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:28.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
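At this point the first target process (pid 75963) is killed and a fresh nvmf_tgt is started for the remaining auth cases, this time with --wait-for-rpc (the target pauses until an explicit RPC finishes initialization) and the nvmf_auth log flag enabled. Stripped of the framework wrappers, the restart the log records amounts to roughly the following; the netns name and binary path are simply the ones from this run:

    # Launch the target inside the test netns, paused until RPC configuration arrives,
    # with DH-HMAC-CHAP debug logging enabled
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # the framework then waits for /var/tmp/spdk.sock to accept RPCs before configuring keys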
01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:28.778 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 null0 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aSf 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wtm ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wtm 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.O4c 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.UQ9 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UQ9 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:24:29.037 05:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bWT 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ks1 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ks1 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.psF 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
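The loop above loads the generated secrets into the target's keyring under the names key0..key3 (plus ckey0..ckey2 for the controller-side secrets; there is no ckey3 in this run) before the host is authorized with key3 only. In sketch form, with the /tmp/spdk.key-* file names being whatever this particular run generated and <hostnqn> again standing for the uuid host NQN:

    # Register each secret file under a stable key name; hosts reference the name, not the path
    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.aSf
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wtm
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.O4c
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UQ9
    rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.bWT
    rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ks1
    rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.psF

    # Authorize the host with key3 alone (no controller key), as in the log
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3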
01:24:29.037 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:29.971 nvme0n1 01:24:29.971 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:24:29.971 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:24:29.971 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:24:30.230 { 01:24:30.230 "auth": { 01:24:30.230 "dhgroup": "ffdhe8192", 01:24:30.230 "digest": "sha512", 01:24:30.230 "state": "completed" 01:24:30.230 }, 01:24:30.230 "cntlid": 1, 01:24:30.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:30.230 "listen_address": { 01:24:30.230 "adrfam": "IPv4", 01:24:30.230 "traddr": "10.0.0.3", 01:24:30.230 "trsvcid": "4420", 01:24:30.230 "trtype": "TCP" 01:24:30.230 }, 01:24:30.230 "peer_address": { 01:24:30.230 "adrfam": "IPv4", 01:24:30.230 "traddr": "10.0.0.1", 01:24:30.230 "trsvcid": "44710", 01:24:30.230 "trtype": "TCP" 01:24:30.230 }, 01:24:30.230 "qid": 0, 01:24:30.230 "state": "enabled", 01:24:30.230 "thread": "nvmf_tgt_poll_group_000" 01:24:30.230 } 01:24:30.230 ]' 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:30.230 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:30.488 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:30.488 05:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:31.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key3 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:24:31.053 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:31.311 05:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:31.630 2024/12/09 05:19:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:31.630 request: 01:24:31.630 { 01:24:31.630 "method": "bdev_nvme_attach_controller", 01:24:31.630 "params": { 01:24:31.630 "name": "nvme0", 01:24:31.630 "trtype": "tcp", 01:24:31.630 "traddr": "10.0.0.3", 01:24:31.630 "adrfam": "ipv4", 01:24:31.630 "trsvcid": "4420", 01:24:31.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:31.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:31.630 "prchk_reftag": false, 01:24:31.630 "prchk_guard": false, 01:24:31.630 "hdgst": false, 01:24:31.630 "ddgst": false, 01:24:31.630 "dhchap_key": "key3", 01:24:31.630 "allow_unrecognized_csi": false 01:24:31.630 } 01:24:31.630 } 01:24:31.630 Got JSON-RPC error response 01:24:31.630 GoRPCClient: error on JSON-RPC call 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:24:31.630 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:31.888 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:24:31.889 2024/12/09 05:19:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:31.889 request: 01:24:31.889 { 01:24:31.889 "method": "bdev_nvme_attach_controller", 01:24:31.889 "params": { 01:24:31.889 "name": "nvme0", 01:24:31.889 "trtype": "tcp", 01:24:31.889 "traddr": "10.0.0.3", 01:24:31.889 "adrfam": "ipv4", 01:24:31.889 "trsvcid": "4420", 01:24:31.889 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:31.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:31.889 "prchk_reftag": false, 01:24:31.889 "prchk_guard": false, 01:24:31.889 "hdgst": false, 01:24:31.889 "ddgst": false, 01:24:31.889 "dhchap_key": "key3", 01:24:31.889 "allow_unrecognized_csi": false 01:24:31.889 } 01:24:31.889 } 01:24:31.889 Got JSON-RPC error response 01:24:31.889 GoRPCClient: error on JSON-RPC call 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:31.889 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:32.149 05:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:32.719 2024/12/09 05:19:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:32.719 request: 01:24:32.719 { 01:24:32.719 "method": "bdev_nvme_attach_controller", 01:24:32.719 "params": { 01:24:32.719 "name": "nvme0", 01:24:32.719 "trtype": "tcp", 01:24:32.719 "traddr": "10.0.0.3", 01:24:32.719 "adrfam": "ipv4", 01:24:32.719 "trsvcid": "4420", 01:24:32.719 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:32.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:32.719 "prchk_reftag": false, 01:24:32.719 "prchk_guard": false, 01:24:32.719 "hdgst": false, 01:24:32.719 "ddgst": false, 01:24:32.719 "dhchap_key": "key0", 01:24:32.719 "dhchap_ctrlr_key": "key1", 01:24:32.719 "allow_unrecognized_csi": false 01:24:32.719 } 01:24:32.719 } 01:24:32.719 Got JSON-RPC error response 01:24:32.719 GoRPCClient: error on JSON-RPC call 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:24:32.719 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:24:32.719 nvme0n1 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:32.979 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:24:33.238 05:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:24:34.176 nvme0n1 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:34.176 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:34.434 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid 717d4fb6-9228-45ae-9435-1e6ea8cff784 -l 0 --dhchap-secret DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: --dhchap-ctrl-secret DHHC-1:03:MTNmNzgyODE5YTVlMGUyOWYwODUyZmVjYjU2ODMzZjRkNzg3ODFmMWU5ZGE2NDNkYzlhMTA0MThlNzc4NWMwN+z6/Gk=: 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
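The entries above rotate the host's keys on the live subsystem with nvmf_subsystem_set_keys, first to key1 and then to the key2/key3 pair; the entries that follow confirm that reconnecting with the stale key1 now fails with an I/O error while the rotated pair still attaches. As a minimal sketch of that rotation, using the RPCs exactly as the log records them (rpc.py and <hostnqn> as in the earlier sketches):

    # Rotate the existing host entry to a new key pair without removing and re-adding it
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <hostnqn> \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # A host still presenting the old key1 is expected to be rejected ...
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1    # fails (Input/output error)

    # ... while the rotated pair still authenticates
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3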
01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:34.999 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:24:35.294 05:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:24:35.859 2024/12/09 05:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:24:35.859 request: 01:24:35.859 { 01:24:35.859 "method": "bdev_nvme_attach_controller", 01:24:35.859 "params": { 01:24:35.859 "name": "nvme0", 01:24:35.859 "trtype": "tcp", 01:24:35.859 "traddr": "10.0.0.3", 01:24:35.859 "adrfam": "ipv4", 
01:24:35.859 "trsvcid": "4420", 01:24:35.859 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:24:35.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784", 01:24:35.859 "prchk_reftag": false, 01:24:35.859 "prchk_guard": false, 01:24:35.859 "hdgst": false, 01:24:35.859 "ddgst": false, 01:24:35.859 "dhchap_key": "key1", 01:24:35.859 "allow_unrecognized_csi": false 01:24:35.859 } 01:24:35.859 } 01:24:35.859 Got JSON-RPC error response 01:24:35.859 GoRPCClient: error on JSON-RPC call 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:35.859 05:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:36.793 nvme0n1 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:36.793 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:24:37.050 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:24:37.307 nvme0n1 01:24:37.307 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 01:24:37.307 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:37.307 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 01:24:37.565 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:37.565 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 01:24:37.565 05:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:24:37.821 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key key3 01:24:37.821 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:37.821 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:37.821 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:37.821 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: '' 2s 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: ]] 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjE5NjFjMjhkNDUxZGY1YjQwMDk1ZWVhZDc1ZGQzNGW0R6fj: 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:24:37.822 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key1 --dhchap-ctrlr-key key2 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: 2s 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: ]] 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWY3NmRjNWVlYTRiODZkM2MwYTMzMTdlYWQ5OTk4YWYxYjNlMzJmY2RkZmU3NTZh0/OK5g==: 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:24:39.718 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:24:42.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:42.262 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:42.520 nvme0n1 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:42.520 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:43.085 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 01:24:43.085 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 01:24:43.085 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 01:24:43.342 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 01:24:43.599 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 01:24:43.600 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 01:24:43.600 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:24:43.857 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 01:24:44.422 2024/12/09 05:19:35 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:24:44.422 request: 01:24:44.422 { 01:24:44.422 "method": "bdev_nvme_set_keys", 01:24:44.422 "params": { 01:24:44.422 "name": "nvme0", 01:24:44.422 "dhchap_key": "key1", 01:24:44.422 "dhchap_ctrlr_key": "key3" 01:24:44.422 } 01:24:44.422 } 01:24:44.422 Got JSON-RPC error response 01:24:44.422 GoRPCClient: error on JSON-RPC call 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:24:44.422 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:44.422 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 01:24:44.422 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 01:24:45.793 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key0 --dhchap-ctrlr-key key1 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:45.794 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:24:46.727 nvme0n1 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --dhchap-key key2 --dhchap-ctrlr-key key3 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:24:46.727 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:24:46.984 2024/12/09 05:19:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:24:46.984 request: 01:24:46.984 { 01:24:46.984 "method": "bdev_nvme_set_keys", 01:24:46.984 "params": { 01:24:46.984 "name": "nvme0", 01:24:46.984 "dhchap_key": "key2", 01:24:46.984 "dhchap_ctrlr_key": "key0" 01:24:46.984 } 01:24:46.984 } 01:24:46.984 Got JSON-RPC error response 01:24:46.984 GoRPCClient: error on JSON-RPC call 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:46.984 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:24:47.241 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 01:24:47.241 05:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 01:24:48.174 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:24:48.174 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:24:48.174 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76007 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76007 ']' 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76007 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76007 01:24:48.432 killing process with pid 76007 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76007' 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76007 01:24:48.432 05:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76007 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:48.690 rmmod nvme_tcp 01:24:48.690 rmmod nvme_fabrics 01:24:48.690 rmmod nvme_keyring 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:48.690 05:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 80525 ']' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 80525 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80525 ']' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80525 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80525 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:48.690 killing process with pid 80525 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80525' 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80525 01:24:48.690 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80525 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip 
link set nvmf_tgt_br down 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 01:24:48.948 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aSf /tmp/spdk.key-sha256.O4c /tmp/spdk.key-sha384.bWT /tmp/spdk.key-sha512.psF /tmp/spdk.key-sha512.wtm /tmp/spdk.key-sha384.UQ9 /tmp/spdk.key-sha256.ks1 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:24:49.207 01:24:49.207 real 2m30.884s 01:24:49.207 user 6m0.968s 01:24:49.207 sys 0m15.522s 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:49.207 ************************************ 01:24:49.207 END TEST nvmf_auth_target 01:24:49.207 ************************************ 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:49.207 ************************************ 01:24:49.207 START TEST nvmf_bdevio_no_huge 01:24:49.207 ************************************ 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:24:49.207 * Looking for test storage... 
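For reference, the cleanup traced above (unload nvme-tcp/nvme-fabrics, stop the nvmf target app, strip the SPDK_NVMF iptables rules, tear down the veth/bridge topology and remove the per-run key files) condenses to roughly the sketch below. Interface, bridge and namespace names are taken from the log; the final ip netns delete, the $target_pid variable and the key-file glob are illustrative assumptions rather than the literal common.sh code.

#!/usr/bin/env bash
# Condensed teardown mirroring the nvmftestfini/cleanup trace above.

modprobe -v -r nvme-tcp                          # also drops nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics

kill "$target_pid" 2>/dev/null || true           # nvmf target app (pid 80525 in this run)

# Drop only the SPDK-tagged iptables rules, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Unwind the veth/bridge topology built by nvmf_veth_init.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if  || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true         # assumed; the trace hides remove_spdk_ns

rm -f /tmp/spdk.key-*                            # per-run DH-HMAC-CHAP key files (names vary)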
01:24:49.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 01:24:49.207 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:49.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:49.208 --rc genhtml_branch_coverage=1 01:24:49.208 --rc genhtml_function_coverage=1 01:24:49.208 --rc genhtml_legend=1 01:24:49.208 --rc geninfo_all_blocks=1 01:24:49.208 --rc geninfo_unexecuted_blocks=1 01:24:49.208 01:24:49.208 ' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:49.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:49.208 --rc genhtml_branch_coverage=1 01:24:49.208 --rc genhtml_function_coverage=1 01:24:49.208 --rc genhtml_legend=1 01:24:49.208 --rc geninfo_all_blocks=1 01:24:49.208 --rc geninfo_unexecuted_blocks=1 01:24:49.208 01:24:49.208 ' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:49.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:49.208 --rc genhtml_branch_coverage=1 01:24:49.208 --rc genhtml_function_coverage=1 01:24:49.208 --rc genhtml_legend=1 01:24:49.208 --rc geninfo_all_blocks=1 01:24:49.208 --rc geninfo_unexecuted_blocks=1 01:24:49.208 01:24:49.208 ' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:49.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:49.208 --rc genhtml_branch_coverage=1 01:24:49.208 --rc genhtml_function_coverage=1 01:24:49.208 --rc genhtml_legend=1 01:24:49.208 --rc geninfo_all_blocks=1 01:24:49.208 --rc geninfo_unexecuted_blocks=1 01:24:49.208 01:24:49.208 ' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:49.208 
05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:49.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:49.208 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:49.209 
05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:49.209 Cannot find device "nvmf_init_br" 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:24:49.209 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:49.467 Cannot find device "nvmf_init_br2" 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:49.467 Cannot find device "nvmf_tgt_br" 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:49.467 Cannot find device "nvmf_tgt_br2" 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:49.467 Cannot find device "nvmf_init_br" 01:24:49.467 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:49.468 Cannot find device "nvmf_init_br2" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:49.468 Cannot find device "nvmf_tgt_br" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:49.468 Cannot find device "nvmf_tgt_br2" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:49.468 Cannot find device "nvmf_br" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:49.468 Cannot find device "nvmf_init_if" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:49.468 Cannot find device "nvmf_init_if2" 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:24:49.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:49.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:49.468 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:49.468 05:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:49.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:49.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 01:24:49.468 01:24:49.468 --- 10.0.0.3 ping statistics --- 01:24:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:49.468 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:49.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:49.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 01:24:49.468 01:24:49.468 --- 10.0.0.4 ping statistics --- 01:24:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:49.468 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:49.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:49.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:24:49.468 01:24:49.468 --- 10.0.0.1 ping statistics --- 01:24:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:49.468 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:49.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:24:49.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 01:24:49.468 01:24:49.468 --- 10.0.0.2 ping statistics --- 01:24:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:49.468 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:49.468 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:49.469 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:49.469 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=81359 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 81359 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 81359 ']' 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:49.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:49.726 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:49.726 [2024-12-09 05:19:41.133312] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
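For reference, the nvmf_veth_init sequence traced above builds a small virtual topology before the target is launched inside the nvmf_tgt_ns_spdk namespace: veth pairs for the initiator and target sides, the target-side ends moved into a network namespace, and everything joined by the nvmf_br bridge so 10.0.0.1/2 can reach 10.0.0.3/4. A minimal sketch of the same layout (names and addresses taken from the log, one veth pair per side shown, run as root, no error handling):

# initiator-side pair stays in the root namespace, target-side end moves into the netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bring everything up and enslave the bridge-side ends to nvmf_br
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port and sanity-check reachability, as the test does above
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3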
01:24:49.726 [2024-12-09 05:19:41.133369] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:24:49.726 [2024-12-09 05:19:41.281237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:24:49.726 [2024-12-09 05:19:41.330598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:49.726 [2024-12-09 05:19:41.330761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:49.726 [2024-12-09 05:19:41.330818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:49.726 [2024-12-09 05:19:41.331259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:49.726 [2024-12-09 05:19:41.331682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:49.726 [2024-12-09 05:19:41.332406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:24:49.726 [2024-12-09 05:19:41.332786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:24:49.726 [2024-12-09 05:19:41.333130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:24:49.726 [2024-12-09 05:19:41.332583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 [2024-12-09 05:19:42.041093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 Malloc0 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDK00000000000001 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:50.665 [2024-12-09 05:19:42.077199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:24:50.665 { 01:24:50.665 "params": { 01:24:50.665 "name": "Nvme$subsystem", 01:24:50.665 "trtype": "$TEST_TRANSPORT", 01:24:50.665 "traddr": "$NVMF_FIRST_TARGET_IP", 01:24:50.665 "adrfam": "ipv4", 01:24:50.665 "trsvcid": "$NVMF_PORT", 01:24:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:24:50.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:24:50.665 "hdgst": ${hdgst:-false}, 01:24:50.665 "ddgst": ${ddgst:-false} 01:24:50.665 }, 01:24:50.665 "method": "bdev_nvme_attach_controller" 01:24:50.665 } 01:24:50.665 EOF 01:24:50.665 )") 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
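The RPC sequence traced above stands the target up end to end: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, the bdev added as a namespace, and a listener on 10.0.0.3:4420. A minimal sketch of the same calls against an already-running target (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py talking to the default /var/tmp/spdk.sock):

rpc.py nvmf_create_transport -t tcp -o -u 8192                       # flags as used in the trace
rpc.py bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420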
01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 01:24:50.665 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:24:50.665 "params": { 01:24:50.665 "name": "Nvme1", 01:24:50.665 "trtype": "tcp", 01:24:50.665 "traddr": "10.0.0.3", 01:24:50.665 "adrfam": "ipv4", 01:24:50.665 "trsvcid": "4420", 01:24:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:24:50.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:24:50.665 "hdgst": false, 01:24:50.665 "ddgst": false 01:24:50.665 }, 01:24:50.665 "method": "bdev_nvme_attach_controller" 01:24:50.665 }' 01:24:50.665 [2024-12-09 05:19:42.119451] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:24:50.665 [2024-12-09 05:19:42.119506] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81413 ] 01:24:50.665 [2024-12-09 05:19:42.263694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:24:50.923 [2024-12-09 05:19:42.313511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:50.923 [2024-12-09 05:19:42.313556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:24:50.923 [2024-12-09 05:19:42.313570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:50.923 I/O targets: 01:24:50.923 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:24:50.923 01:24:50.923 01:24:50.923 CUnit - A unit testing framework for C - Version 2.1-3 01:24:50.923 http://cunit.sourceforge.net/ 01:24:50.923 01:24:50.923 01:24:50.923 Suite: bdevio tests on: Nvme1n1 01:24:50.923 Test: blockdev write read block ...passed 01:24:51.181 Test: blockdev write zeroes read block ...passed 01:24:51.181 Test: blockdev write zeroes read no split ...passed 01:24:51.181 Test: blockdev write zeroes read split ...passed 01:24:51.181 Test: blockdev write zeroes read split partial ...passed 01:24:51.181 Test: blockdev reset ...[2024-12-09 05:19:42.587494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:24:51.181 [2024-12-09 05:19:42.587814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e0eb0 (9): Bad file descriptor 01:24:51.181 [2024-12-09 05:19:42.601751] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. passed
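The params/method fragment printed above is what gen_nvmf_target_json splices into an SPDK JSON configuration before handing it to bdevio through process substitution; the --json /dev/fd/62 argument is simply the file descriptor bash assigned to that substitution. Roughly, the configuration bdevio sees has the shape below (a sketch, not the verbatim generated output; written to a temporary file here for clarity):

cat <<'EOF' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# bdevio attaches to the target over TCP and runs the block-level suite shown above
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024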
01:24:51.181 01:24:51.181 Test: blockdev write read 8 blocks ...passed 01:24:51.181 Test: blockdev write read size > 128k ...passed 01:24:51.181 Test: blockdev write read invalid size ...passed 01:24:51.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:24:51.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:24:51.181 Test: blockdev write read max offset ...passed 01:24:51.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:24:51.181 Test: blockdev writev readv 8 blocks ...passed 01:24:51.181 Test: blockdev writev readv 30 x 1block ...passed 01:24:51.181 Test: blockdev writev readv block ...passed 01:24:51.181 Test: blockdev writev readv size > 128k ...passed 01:24:51.181 Test: blockdev writev readv size > 128k in two iovs ...passed 01:24:51.181 Test: blockdev comparev and writev ...[2024-12-09 05:19:42.770686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.770805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.770823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.770830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:24:51.181 [2024-12-09 05:19:42.771431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:24:51.181 [2024-12-09 05:19:42.771438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED 
(00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:24:51.439 passed 01:24:51.439 Test: blockdev nvme passthru rw ...passed 01:24:51.439 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:19:42.853890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:51.439 [2024-12-09 05:19:42.853913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:24:51.439 [2024-12-09 05:19:42.853982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:51.439 [2024-12-09 05:19:42.853990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:24:51.439 [2024-12-09 05:19:42.854054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:51.439 [2024-12-09 05:19:42.854065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:24:51.439 passed 01:24:51.439 Test: blockdev nvme admin passthru ...[2024-12-09 05:19:42.854135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:51.439 [2024-12-09 05:19:42.854146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:24:51.439 passed 01:24:51.439 Test: blockdev copy ...passed 01:24:51.439 01:24:51.439 Run Summary: Type Total Ran Passed Failed Inactive 01:24:51.439 suites 1 1 n/a 0 0 01:24:51.439 tests 23 23 23 0 0 01:24:51.439 asserts 152 152 152 0 n/a 01:24:51.439 01:24:51.439 Elapsed time = 0.871 seconds 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:51.697 rmmod nvme_tcp 01:24:51.697 rmmod nvme_fabrics 01:24:51.697 rmmod nvme_keyring 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # 
set -e 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 81359 ']' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 81359 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 81359 ']' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 81359 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81359 01:24:51.697 killing process with pid 81359 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81359' 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 81359 01:24:51.697 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 81359 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 01:24:52.264 01:24:52.264 real 0m3.177s 01:24:52.264 user 0m10.587s 01:24:52.264 sys 0m1.049s 01:24:52.264 ************************************ 01:24:52.264 END TEST nvmf_bdevio_no_huge 01:24:52.264 ************************************ 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:52.264 ************************************ 01:24:52.264 START TEST nvmf_tls 01:24:52.264 ************************************ 01:24:52.264 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:24:52.523 * Looking for test storage... 
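The suite that just ended (nvmf_bdevio_no_huge) exercises one specific configuration: both the target and bdevio run without hugepages. As the EAL parameter lines earlier show, SPDK's --no-huge -s 1024 becomes DPDK's --no-huge --legacy-mem with a fixed memory budget instead of hugetlbfs-backed memory. A minimal sketch of launching the target that way inside the test namespace (paths and flags taken from the log):

# -s 1024 pre-allocates 1024 MiB of ordinary (non-hugepage) memory; -m 0x78 pins
# reactors to cores 3-6, matching the "Reactor started on core ..." lines above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78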
01:24:52.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:52.523 05:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:52.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:52.523 --rc genhtml_branch_coverage=1 01:24:52.523 --rc genhtml_function_coverage=1 01:24:52.523 --rc genhtml_legend=1 01:24:52.523 --rc geninfo_all_blocks=1 01:24:52.523 --rc geninfo_unexecuted_blocks=1 01:24:52.523 01:24:52.523 ' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:52.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:52.523 --rc genhtml_branch_coverage=1 01:24:52.523 --rc genhtml_function_coverage=1 01:24:52.523 --rc genhtml_legend=1 01:24:52.523 --rc geninfo_all_blocks=1 01:24:52.523 --rc geninfo_unexecuted_blocks=1 01:24:52.523 01:24:52.523 ' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:52.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:52.523 --rc genhtml_branch_coverage=1 01:24:52.523 --rc genhtml_function_coverage=1 01:24:52.523 --rc genhtml_legend=1 01:24:52.523 --rc geninfo_all_blocks=1 01:24:52.523 --rc geninfo_unexecuted_blocks=1 01:24:52.523 01:24:52.523 ' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:52.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:52.523 --rc genhtml_branch_coverage=1 01:24:52.523 --rc genhtml_function_coverage=1 01:24:52.523 --rc genhtml_legend=1 01:24:52.523 --rc geninfo_all_blocks=1 01:24:52.523 --rc geninfo_unexecuted_blocks=1 01:24:52.523 01:24:52.523 ' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:52.523 05:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:52.523 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:52.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:52.524 
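One detail worth noting in the trace above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' and bash reports "common.sh: line 33: [: : integer expression expected", because the flag being tested is unset in this job and expands to an empty string. The run is unaffected (the test simply evaluates false), but the noise is avoidable with a defaulted expansion; a sketch of the pattern (VAR and --some-flag are placeholders, the real names are not visible in this trace):

# what the trace shows, reconstructed: an empty expansion breaks the numeric test
[ "$VAR" -eq 1 ] && NVMF_APP+=(--some-flag)        # "$VAR" empty -> "integer expression expected"
# defaulting the expansion keeps the operand numeric and silences the error
[ "${VAR:-0}" -eq 1 ] && NVMF_APP+=(--some-flag)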
05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:52.524 Cannot find device "nvmf_init_br" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:52.524 Cannot find device "nvmf_init_br2" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:52.524 Cannot find device "nvmf_tgt_br" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:52.524 Cannot find device "nvmf_tgt_br2" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:52.524 Cannot find device "nvmf_init_br" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:52.524 Cannot find device "nvmf_init_br2" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:52.524 Cannot find device "nvmf_tgt_br" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:52.524 Cannot find device "nvmf_tgt_br2" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:52.524 Cannot find device "nvmf_br" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:52.524 Cannot find device "nvmf_init_if" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:52.524 Cannot find device "nvmf_init_if2" 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:52.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 01:24:52.524 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:52.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:52.782 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:52.783 05:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:52.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:52.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 01:24:52.783 01:24:52.783 --- 10.0.0.3 ping statistics --- 01:24:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:52.783 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:52.783 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:52.783 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 01:24:52.783 01:24:52.783 --- 10.0.0.4 ping statistics --- 01:24:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:52.783 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:52.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:52.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:24:52.783 01:24:52.783 --- 10.0.0.1 ping statistics --- 01:24:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:52.783 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:52.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:24:52.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:24:52.783 01:24:52.783 --- 10.0.0.2 ping statistics --- 01:24:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:52.783 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=81647 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 81647 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 81647 ']' 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:52.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:24:52.783 05:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:24:52.783 [2024-12-09 05:19:44.360281] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
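Note on the trace above: this is the per-test network bring-up from nvmf/common.sh. Veth pairs are created, their target-side ends are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to a bridge (nvmf_br), iptables rules open TCP port 4420, reachability is checked with ping in both directions, nvme-tcp is modprobed, and nvmf_tgt is finally launched inside the namespace with --wait-for-rpc so it can be configured before initialization completes. Condensed to a single veth pair (the *_if2 pair is set up identically), the fixture amounts to roughly the following sketch; the "ip netns add" step happens just before this excerpt and the "ip link set ... up" calls are elided:

    ip netns add nvmf_tgt_ns_spdk                          # created earlier in the full log
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk         # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                        # bridge ties the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                     # initiator -> namespaced target

The target application command is then prefixed with "ip netns exec nvmf_tgt_ns_spdk", which is why every nvmf_tgt invocation in the rest of the log runs inside that namespace.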
01:24:52.783 [2024-12-09 05:19:44.360329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:53.041 [2024-12-09 05:19:44.500382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:53.041 [2024-12-09 05:19:44.535629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:53.041 [2024-12-09 05:19:44.535660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:53.041 [2024-12-09 05:19:44.535666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:53.041 [2024-12-09 05:19:44.535671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:53.041 [2024-12-09 05:19:44.535675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:53.041 [2024-12-09 05:19:44.535942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:53.611 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 01:24:53.612 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:24:53.870 true 01:24:53.870 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:53.870 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 01:24:54.128 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 01:24:54.128 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 01:24:54.128 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:24:54.386 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:54.386 05:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 01:24:54.644 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 01:24:54.644 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 01:24:54.644 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:24:54.644 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 01:24:54.644 05:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:54.901 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 01:24:54.901 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 01:24:54.901 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:54.901 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 01:24:55.160 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 01:24:55.160 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 01:24:55.160 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:24:55.418 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:55.418 05:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 01:24:55.676 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 01:24:55.676 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 01:24:55.676 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:24:55.676 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:24:55.676 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:24:55.933 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:24:55.934 05:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:24:55.934 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QqE336F7ZU 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.4HdN4MLDuM 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QqE336F7ZU 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.4HdN4MLDuM 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:24:56.191 05:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:24:56.448 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QqE336F7ZU 01:24:56.448 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QqE336F7ZU 01:24:56.448 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:24:56.706 [2024-12-09 05:19:48.219527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:56.706 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:24:56.982 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:24:57.301 [2024-12-09 05:19:48.631592] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:24:57.301 [2024-12-09 05:19:48.631726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:57.301 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:24:57.301 malloc0 01:24:57.302 05:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:24:57.571 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU 01:24:57.829 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:24:58.086 05:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QqE336F7ZU 01:25:08.054 Initializing NVMe Controllers 01:25:08.054 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:25:08.054 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:25:08.054 Initialization complete. Launching workers. 01:25:08.054 ======================================================== 01:25:08.054 Latency(us) 01:25:08.054 Device Information : IOPS MiB/s Average min max 01:25:08.054 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18299.50 71.48 3497.63 1038.63 12729.03 01:25:08.054 ======================================================== 01:25:08.054 Total : 18299.50 71.48 3497.63 1038.63 12729.03 01:25:08.054 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqE336F7ZU 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QqE336F7ZU 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82015 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82015 /var/tmp/bdevperf.sock 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82015 ']' 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:08.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:08.312 05:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:08.312 [2024-12-09 05:19:59.727590] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
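Note on the trace above: because the target was started with --wait-for-rpc, the test first configures the socket layer over RPC, switching the default implementation to ssl, round-tripping --tls-version (13, then 7) and the ktls flag to confirm sock_impl_set_options and sock_impl_get_options agree. It then generates two PSKs in the NVMe TLS interchange form (prefix NVMeTLSkey-1, a "01" type field, and a base64 payload), writes them to mode-0600 temp files, and wires up subsystem cnode1 with a TLS-enabled listener (-k) and a host entry bound to key0; the spdk_nvme_perf run with -S ssl and --psk-path then pushes about 18.3k IOPS through that secured listener. A condensed replay of the calls, exactly as they appear in the trace (paths shortened to be relative to the SPDK repo; the key file name is whatever mktemp returned, /tmp/tmp.QqE336F7ZU in this run):

    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.QqE336F7ZU
    chmod 0600 /tmp/tmp.QqE336F7ZU
    scripts/rpc.py sock_set_default_impl -i ssl
    scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    scripts/rpc.py framework_start_init                    # finish startup deferred by --wait-for-rpc
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0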
01:25:08.312 [2024-12-09 05:19:59.727644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82015 ] 01:25:08.312 [2024-12-09 05:19:59.868407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:08.312 [2024-12-09 05:19:59.903690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:09.246 05:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:09.246 05:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:09.246 05:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU 01:25:09.246 05:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:09.504 [2024-12-09 05:20:00.984668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:09.504 TLSTESTn1 01:25:09.504 05:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:25:09.764 Running I/O for 10 seconds... 01:25:11.629 7288.00 IOPS, 28.47 MiB/s [2024-12-09T05:20:04.183Z] 7329.50 IOPS, 28.63 MiB/s [2024-12-09T05:20:05.558Z] 7345.33 IOPS, 28.69 MiB/s [2024-12-09T05:20:06.490Z] 7349.00 IOPS, 28.71 MiB/s [2024-12-09T05:20:07.422Z] 7357.60 IOPS, 28.74 MiB/s [2024-12-09T05:20:08.351Z] 7360.83 IOPS, 28.75 MiB/s [2024-12-09T05:20:09.283Z] 7365.29 IOPS, 28.77 MiB/s [2024-12-09T05:20:10.215Z] 7366.25 IOPS, 28.77 MiB/s [2024-12-09T05:20:11.222Z] 7363.89 IOPS, 28.77 MiB/s [2024-12-09T05:20:11.222Z] 7365.80 IOPS, 28.77 MiB/s 01:25:19.605 Latency(us) 01:25:19.605 [2024-12-09T05:20:11.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:19.605 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:25:19.605 Verification LBA range: start 0x0 length 0x2000 01:25:19.605 TLSTESTn1 : 10.01 7371.58 28.80 0.00 0.00 17335.90 3352.42 16131.94 01:25:19.605 [2024-12-09T05:20:11.222Z] =================================================================================================================== 01:25:19.605 [2024-12-09T05:20:11.222Z] Total : 7371.58 28.80 0.00 0.00 17335.90 3352.42 16131.94 01:25:19.605 { 01:25:19.605 "results": [ 01:25:19.605 { 01:25:19.605 "job": "TLSTESTn1", 01:25:19.605 "core_mask": "0x4", 01:25:19.605 "workload": "verify", 01:25:19.605 "status": "finished", 01:25:19.605 "verify_range": { 01:25:19.605 "start": 0, 01:25:19.605 "length": 8192 01:25:19.605 }, 01:25:19.605 "queue_depth": 128, 01:25:19.605 "io_size": 4096, 01:25:19.605 "runtime": 10.009382, 01:25:19.605 "iops": 7371.583979910049, 01:25:19.605 "mibps": 28.795249921523627, 01:25:19.605 "io_failed": 0, 01:25:19.605 "io_timeout": 0, 01:25:19.605 "avg_latency_us": 17335.899972164447, 01:25:19.605 "min_latency_us": 3352.4184615384615, 01:25:19.605 "max_latency_us": 16131.938461538462 01:25:19.605 } 01:25:19.605 ], 01:25:19.605 "core_count": 1 01:25:19.605 } 01:25:19.605 05:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82015 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82015 ']' 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82015 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82015 01:25:19.605 killing process with pid 82015 01:25:19.605 Received shutdown signal, test time was about 10.000000 seconds 01:25:19.605 01:25:19.605 Latency(us) 01:25:19.605 [2024-12-09T05:20:11.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:19.605 [2024-12-09T05:20:11.222Z] =================================================================================================================== 01:25:19.605 [2024-12-09T05:20:11.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82015' 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82015 01:25:19.605 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82015 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4HdN4MLDuM 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4HdN4MLDuM 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:25:19.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4HdN4MLDuM 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4HdN4MLDuM 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82173 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82173 /var/tmp/bdevperf.sock 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82173 ']' 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:19.863 05:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:19.863 [2024-12-09 05:20:11.379679] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
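Note on the trace above: run_bdevperf (target/tls.sh@22-42) is the initiator-side half of the test. It launches bdevperf in wait mode (-z) on its own RPC socket, registers the PSK there under the name key0, attaches the controller with --psk key0, and drives 10 seconds of verify I/O; with the matching key the run above completes cleanly at about 7.37k IOPS (28.8 MiB/s). The @147 case that starts here repeats the same flow but loads the second key (/tmp/tmp.4HdN4MLDuM), which was never registered on the target, so the attach is expected to fail. A condensed sketch of the flow, using the same commands as the trace (paths relative to the SPDK repo):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests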
01:25:19.863 [2024-12-09 05:20:11.379731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82173 ] 01:25:20.120 [2024-12-09 05:20:11.510440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:20.120 [2024-12-09 05:20:11.540627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:20.685 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:20.685 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:20.685 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4HdN4MLDuM 01:25:20.942 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:21.200 [2024-12-09 05:20:12.650032] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:21.200 [2024-12-09 05:20:12.659223] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:25:21.200 [2024-12-09 05:20:12.659772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7eb00 (107): Transport endpoint is not connected 01:25:21.200 [2024-12-09 05:20:12.660763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7eb00 (9): Bad file descriptor 01:25:21.200 [2024-12-09 05:20:12.661762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:25:21.200 [2024-12-09 05:20:12.661824] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:25:21.200 [2024-12-09 05:20:12.661863] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:25:21.200 [2024-12-09 05:20:12.661890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
01:25:21.200 2024/12/09 05:20:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:25:21.200 request: 01:25:21.200 { 01:25:21.200 "method": "bdev_nvme_attach_controller", 01:25:21.200 "params": { 01:25:21.200 "name": "TLSTEST", 01:25:21.200 "trtype": "tcp", 01:25:21.200 "traddr": "10.0.0.3", 01:25:21.200 "adrfam": "ipv4", 01:25:21.200 "trsvcid": "4420", 01:25:21.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:21.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:21.200 "prchk_reftag": false, 01:25:21.200 "prchk_guard": false, 01:25:21.200 "hdgst": false, 01:25:21.200 "ddgst": false, 01:25:21.200 "psk": "key0", 01:25:21.200 "allow_unrecognized_csi": false 01:25:21.200 } 01:25:21.200 } 01:25:21.200 Got JSON-RPC error response 01:25:21.200 GoRPCClient: error on JSON-RPC call 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82173 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82173 ']' 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82173 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82173 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82173' 01:25:21.200 killing process with pid 82173 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82173 01:25:21.200 Received shutdown signal, test time was about 10.000000 seconds 01:25:21.200 01:25:21.200 Latency(us) 01:25:21.200 [2024-12-09T05:20:12.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:21.200 [2024-12-09T05:20:12.817Z] =================================================================================================================== 01:25:21.200 [2024-12-09T05:20:12.817Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:21.200 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82173 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:21.458 05:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QqE336F7ZU 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QqE336F7ZU 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QqE336F7ZU 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QqE336F7ZU 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82220 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82220 /var/tmp/bdevperf.sock 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82220 ']' 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:21.458 05:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:21.458 [2024-12-09 05:20:12.876170] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
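Note on the trace above: with the unknown PSK the TLS handshake never completes. The target drops the connection, the initiator sees spdk_sock_recv() fail with errno 107 ("Transport endpoint is not connected"), and bdev_nvme_attach_controller returns Code=-5 Input/output error, which is exactly what the @147 case expects. The NOT wrapper from autotest_common.sh inverts the exit status of the command it runs, so the failed attach is recorded as a pass. A behavioural sketch of the idiom (not the verbatim helper):

    NOT() { "$@" && return 1 || return 0; }    # succeed only if the wrapped command fails
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4HdN4MLDuM

The @150 case launched above reuses the valid key but identifies as host2; its outcome is analysed after the next chunk of the log.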
01:25:21.458 [2024-12-09 05:20:12.876231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82220 ] 01:25:21.458 [2024-12-09 05:20:13.012618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:21.458 [2024-12-09 05:20:13.043103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:22.390 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:22.390 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:22.390 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU 01:25:22.390 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 01:25:22.648 [2024-12-09 05:20:14.136531] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:22.648 [2024-12-09 05:20:14.140445] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:25:22.648 [2024-12-09 05:20:14.140470] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:25:22.648 [2024-12-09 05:20:14.140501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:25:22.648 [2024-12-09 05:20:14.141276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf17b00 (107): Transport endpoint is not connected 01:25:22.648 [2024-12-09 05:20:14.142265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf17b00 (9): Bad file descriptor 01:25:22.648 [2024-12-09 05:20:14.143263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:25:22.648 [2024-12-09 05:20:14.143325] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:25:22.648 [2024-12-09 05:20:14.143365] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:25:22.648 [2024-12-09 05:20:14.143392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
01:25:22.648 2024/12/09 05:20:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:25:22.648 request: 01:25:22.648 { 01:25:22.648 "method": "bdev_nvme_attach_controller", 01:25:22.648 "params": { 01:25:22.648 "name": "TLSTEST", 01:25:22.648 "trtype": "tcp", 01:25:22.648 "traddr": "10.0.0.3", 01:25:22.648 "adrfam": "ipv4", 01:25:22.648 "trsvcid": "4420", 01:25:22.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:22.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:25:22.648 "prchk_reftag": false, 01:25:22.648 "prchk_guard": false, 01:25:22.648 "hdgst": false, 01:25:22.648 "ddgst": false, 01:25:22.648 "psk": "key0", 01:25:22.648 "allow_unrecognized_csi": false 01:25:22.648 } 01:25:22.648 } 01:25:22.648 Got JSON-RPC error response 01:25:22.648 GoRPCClient: error on JSON-RPC call 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82220 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82220 ']' 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82220 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82220 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:22.648 killing process with pid 82220 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82220' 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82220 01:25:22.648 Received shutdown signal, test time was about 10.000000 seconds 01:25:22.648 01:25:22.648 Latency(us) 01:25:22.648 [2024-12-09T05:20:14.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:22.648 [2024-12-09T05:20:14.265Z] =================================================================================================================== 01:25:22.648 [2024-12-09T05:20:14.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:22.648 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82220 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:22.907 05:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqE336F7ZU 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqE336F7ZU 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqE336F7ZU 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QqE336F7ZU 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82273 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82273 /var/tmp/bdevperf.sock 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82273 ']' 01:25:22.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:22.907 05:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:22.907 [2024-12-09 05:20:14.343887] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
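Note on the trace above: here the key material is valid but the identity is not. The target resolves the PSK by the TLS PSK identity string "NVMe0R01 <hostnqn> <subnqn>", and since only host1 was registered for cnode1, the lookup for host2 fails ("Could not find PSK for identity"), the handshake is aborted, and the attach again surfaces on the initiator as errno 107 and Code=-5. The @153 case that starts above is the mirror image: the same host1 key, but aimed at a non-existent cnode2, so that identity cannot be resolved either. The two expected failures, as invoked in the trace:

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QqE336F7ZU   # wrong host NQN
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqE336F7ZU   # wrong subsystem NQN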
01:25:22.907 [2024-12-09 05:20:14.343949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82273 ] 01:25:22.907 [2024-12-09 05:20:14.483673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:22.907 [2024-12-09 05:20:14.518268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:23.840 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:23.840 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:23.840 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqE336F7ZU 01:25:23.840 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:24.097 [2024-12-09 05:20:15.606493] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:24.097 [2024-12-09 05:20:15.610404] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:25:24.097 [2024-12-09 05:20:15.610430] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:25:24.097 [2024-12-09 05:20:15.610461] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:25:24.097 [2024-12-09 05:20:15.611240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1949b00 (107): Transport endpoint is not connected 01:25:24.097 [2024-12-09 05:20:15.612226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1949b00 (9): Bad file descriptor 01:25:24.097 [2024-12-09 05:20:15.613224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 01:25:24.097 [2024-12-09 05:20:15.613285] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:25:24.097 [2024-12-09 05:20:15.613320] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 01:25:24.097 [2024-12-09 05:20:15.613347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
01:25:24.097 2024/12/09 05:20:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:25:24.097 request: 01:25:24.097 { 01:25:24.097 "method": "bdev_nvme_attach_controller", 01:25:24.097 "params": { 01:25:24.097 "name": "TLSTEST", 01:25:24.097 "trtype": "tcp", 01:25:24.097 "traddr": "10.0.0.3", 01:25:24.097 "adrfam": "ipv4", 01:25:24.097 "trsvcid": "4420", 01:25:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:25:24.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:24.097 "prchk_reftag": false, 01:25:24.097 "prchk_guard": false, 01:25:24.097 "hdgst": false, 01:25:24.097 "ddgst": false, 01:25:24.097 "psk": "key0", 01:25:24.097 "allow_unrecognized_csi": false 01:25:24.097 } 01:25:24.097 } 01:25:24.097 Got JSON-RPC error response 01:25:24.097 GoRPCClient: error on JSON-RPC call 01:25:24.097 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82273 01:25:24.097 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82273 ']' 01:25:24.097 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82273 01:25:24.097 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82273 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82273' 01:25:24.098 killing process with pid 82273 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82273 01:25:24.098 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82273 01:25:24.098 Received shutdown signal, test time was about 10.000000 seconds 01:25:24.098 01:25:24.098 Latency(us) 01:25:24.098 [2024-12-09T05:20:15.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:24.098 [2024-12-09T05:20:15.715Z] =================================================================================================================== 01:25:24.098 [2024-12-09T05:20:15.715Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:24.375 05:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82325 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82325 /var/tmp/bdevperf.sock 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82325 ']' 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:24.375 05:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:24.375 [2024-12-09 05:20:15.821318] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:24.375 [2024-12-09 05:20:15.821379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82325 ] 01:25:24.375 [2024-12-09 05:20:15.957288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:24.375 [2024-12-09 05:20:15.987085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:25.308 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:25.308 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:25.308 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 01:25:25.308 [2024-12-09 05:20:16.879450] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 01:25:25.308 [2024-12-09 05:20:16.879533] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:25:25.308 2024/12/09 05:20:16 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:25:25.308 request: 01:25:25.308 { 01:25:25.308 "method": "keyring_file_add_key", 01:25:25.308 "params": { 01:25:25.308 "name": "key0", 01:25:25.308 "path": "" 01:25:25.308 } 01:25:25.308 } 01:25:25.308 Got JSON-RPC error response 01:25:25.308 GoRPCClient: error on JSON-RPC call 01:25:25.308 05:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:25.566 [2024-12-09 05:20:17.087568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:25.566 [2024-12-09 05:20:17.087669] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:25:25.566 2024/12/09 05:20:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 01:25:25.566 request: 01:25:25.566 { 01:25:25.566 "method": "bdev_nvme_attach_controller", 01:25:25.566 "params": { 01:25:25.566 "name": "TLSTEST", 01:25:25.566 "trtype": "tcp", 01:25:25.566 "traddr": "10.0.0.3", 01:25:25.566 "adrfam": "ipv4", 01:25:25.566 "trsvcid": "4420", 01:25:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:25.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:25.566 "prchk_reftag": false, 01:25:25.566 "prchk_guard": false, 01:25:25.566 "hdgst": false, 01:25:25.566 "ddgst": false, 01:25:25.566 "psk": "key0", 01:25:25.566 "allow_unrecognized_csi": false 01:25:25.566 } 01:25:25.566 } 01:25:25.566 Got JSON-RPC error response 01:25:25.566 GoRPCClient: error on JSON-RPC call 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82325 01:25:25.566 05:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82325 ']' 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82325 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82325 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:25.566 killing process with pid 82325 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82325' 01:25:25.566 Received shutdown signal, test time was about 10.000000 seconds 01:25:25.566 01:25:25.566 Latency(us) 01:25:25.566 [2024-12-09T05:20:17.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:25.566 [2024-12-09T05:20:17.183Z] =================================================================================================================== 01:25:25.566 [2024-12-09T05:20:17.183Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82325 01:25:25.566 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82325 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81647 ']' 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81647' 01:25:25.824 killing process with pid 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81647 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 01:25:25.824 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.oT7ReP1J2E 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.oT7ReP1J2E 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82388 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82388 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82388 ']' 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:26.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:26.081 05:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:25:26.081 [2024-12-09 05:20:17.502144] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
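format_interchange_psk above wraps the configured 48-byte key into the TLS PSK interchange string: the NVMeTLSkey-1 prefix, a hash id ("02" selecting SHA-384), a base64 payload, and a trailing colon. A Python sketch of the same construction; the little-endian byte order of the appended CRC-32 is an assumption about the helper, not something the log states:

    import base64
    import zlib

    def format_interchange_psk(configured_key: str, hash_id: int) -> str:
        # Append the CRC-32 of the configured key (byte order assumed little-endian),
        # base64-encode, and wrap with the NVMeTLSkey-1 prefix and the hash id
        # (01 = SHA-256, 02 = SHA-384).
        key = configured_key.encode()
        crc = zlib.crc32(key).to_bytes(4, "little")
        return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(key + crc).decode()}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))

The resulting string is what the test writes into the mktemp'd key file and locks down with chmod 0600 before registering it as key0.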
01:25:26.081 [2024-12-09 05:20:17.502196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:26.081 [2024-12-09 05:20:17.630248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:26.081 [2024-12-09 05:20:17.659488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:26.081 [2024-12-09 05:20:17.659522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:26.081 [2024-12-09 05:20:17.659528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:26.081 [2024-12-09 05:20:17.659532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:26.081 [2024-12-09 05:20:17.659535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:26.081 [2024-12-09 05:20:17.659742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oT7ReP1J2E 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:25:27.020 [2024-12-09 05:20:18.589947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:27.020 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:25:27.278 05:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:25:27.536 [2024-12-09 05:20:18.994005] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:25:27.536 [2024-12-09 05:20:18.994137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:27.536 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:25:27.802 malloc0 01:25:27.802 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:25:28.059 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:28.060 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:25:28.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oT7ReP1J2E 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oT7ReP1J2E 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82492 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82492 /var/tmp/bdevperf.sock 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82492 ']' 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:28.318 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:28.318 [2024-12-09 05:20:19.862945] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
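The target-side sequence just logged (setup_nvmf_tgt) is a fixed set of RPCs: a TCP transport, a subsystem, a TLS-enabled listener (-k), a malloc0 namespace, the key file, and a host entry tied to that key. A sketch replaying it through rpc.py, assuming the target is already up on the default /var/tmp/spdk.sock and the key file exists with 0600 permissions:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in this log

    def rpc(*args):
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k")  # -k: TLS-enabled listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.oT7ReP1J2E")  # must be mode 0600
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "key0")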
01:25:28.318 [2024-12-09 05:20:19.863114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82492 ] 01:25:28.576 [2024-12-09 05:20:20.005732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:28.576 [2024-12-09 05:20:20.043869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:28.576 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:28.576 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:28.576 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:28.833 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:29.090 [2024-12-09 05:20:20.510372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:29.090 TLSTESTn1 01:25:29.090 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:25:29.090 Running I/O for 10 seconds... 01:25:31.394 6428.00 IOPS, 25.11 MiB/s [2024-12-09T05:20:23.941Z] 6813.50 IOPS, 26.62 MiB/s [2024-12-09T05:20:24.871Z] 7000.33 IOPS, 27.35 MiB/s [2024-12-09T05:20:25.801Z] 7098.50 IOPS, 27.73 MiB/s [2024-12-09T05:20:26.732Z] 7156.80 IOPS, 27.96 MiB/s [2024-12-09T05:20:28.102Z] 7195.17 IOPS, 28.11 MiB/s [2024-12-09T05:20:28.737Z] 7224.14 IOPS, 28.22 MiB/s [2024-12-09T05:20:30.105Z] 7245.50 IOPS, 28.30 MiB/s [2024-12-09T05:20:31.037Z] 7259.44 IOPS, 28.36 MiB/s [2024-12-09T05:20:31.037Z] 7269.10 IOPS, 28.39 MiB/s 01:25:39.420 Latency(us) 01:25:39.420 [2024-12-09T05:20:31.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:39.420 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:25:39.420 Verification LBA range: start 0x0 length 0x2000 01:25:39.420 TLSTESTn1 : 10.01 7275.12 28.42 0.00 0.00 17565.70 3188.58 16031.11 01:25:39.420 [2024-12-09T05:20:31.037Z] =================================================================================================================== 01:25:39.420 [2024-12-09T05:20:31.037Z] Total : 7275.12 28.42 0.00 0.00 17565.70 3188.58 16031.11 01:25:39.420 { 01:25:39.420 "results": [ 01:25:39.420 { 01:25:39.420 "job": "TLSTESTn1", 01:25:39.420 "core_mask": "0x4", 01:25:39.420 "workload": "verify", 01:25:39.420 "status": "finished", 01:25:39.420 "verify_range": { 01:25:39.420 "start": 0, 01:25:39.420 "length": 8192 01:25:39.420 }, 01:25:39.420 "queue_depth": 128, 01:25:39.420 "io_size": 4096, 01:25:39.420 "runtime": 10.00905, 01:25:39.420 "iops": 7275.1160200018985, 01:25:39.420 "mibps": 28.418421953132416, 01:25:39.420 "io_failed": 0, 01:25:39.420 "io_timeout": 0, 01:25:39.420 "avg_latency_us": 17565.695660332913, 01:25:39.420 "min_latency_us": 3188.5784615384614, 01:25:39.420 "max_latency_us": 16031.113846153847 01:25:39.420 } 01:25:39.420 ], 01:25:39.420 "core_count": 1 01:25:39.420 } 01:25:39.420 05:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82492 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82492 ']' 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82492 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82492 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:39.421 killing process with pid 82492 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82492' 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82492 01:25:39.421 Received shutdown signal, test time was about 10.000000 seconds 01:25:39.421 01:25:39.421 Latency(us) 01:25:39.421 [2024-12-09T05:20:31.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:39.421 [2024-12-09T05:20:31.038Z] =================================================================================================================== 01:25:39.421 [2024-12-09T05:20:31.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82492 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.oT7ReP1J2E 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oT7ReP1J2E 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oT7ReP1J2E 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oT7ReP1J2E 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.oT7ReP1J2E 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82642 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82642 /var/tmp/bdevperf.sock 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82642 ']' 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:39.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:39.421 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:39.421 [2024-12-09 05:20:30.905582] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:39.421 [2024-12-09 05:20:30.905631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82642 ] 01:25:39.678 [2024-12-09 05:20:31.037582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:39.678 [2024-12-09 05:20:31.067898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:40.242 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:40.242 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:40.242 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:40.499 [2024-12-09 05:20:31.969158] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oT7ReP1J2E': 0100666 01:25:40.499 [2024-12-09 05:20:31.969179] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:25:40.499 2024/12/09 05:20:31 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.oT7ReP1J2E], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:25:40.499 request: 01:25:40.499 { 01:25:40.499 "method": "keyring_file_add_key", 01:25:40.499 "params": { 01:25:40.499 "name": "key0", 01:25:40.499 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:40.499 } 01:25:40.499 } 01:25:40.499 Got JSON-RPC error response 01:25:40.499 GoRPCClient: error on JSON-RPC call 01:25:40.499 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:40.756 [2024-12-09 05:20:32.169261] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:40.756 [2024-12-09 05:20:32.169289] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:25:40.756 2024/12/09 05:20:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 01:25:40.756 request: 01:25:40.756 { 01:25:40.756 "method": "bdev_nvme_attach_controller", 01:25:40.756 "params": { 01:25:40.756 "name": "TLSTEST", 01:25:40.756 "trtype": "tcp", 01:25:40.756 "traddr": "10.0.0.3", 01:25:40.756 "adrfam": "ipv4", 01:25:40.756 "trsvcid": "4420", 01:25:40.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:40.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:40.756 "prchk_reftag": false, 01:25:40.756 "prchk_guard": false, 01:25:40.756 "hdgst": false, 01:25:40.756 "ddgst": false, 01:25:40.756 "psk": "key0", 01:25:40.756 "allow_unrecognized_csi": false 01:25:40.756 } 01:25:40.756 } 01:25:40.756 Got JSON-RPC error response 01:25:40.756 GoRPCClient: error on JSON-RPC call 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82642 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82642 ']' 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82642 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82642 01:25:40.756 killing process with pid 82642 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:40.756 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82642' 01:25:40.757 Received shutdown signal, test time was about 10.000000 seconds 01:25:40.757 01:25:40.757 Latency(us) 01:25:40.757 [2024-12-09T05:20:32.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:40.757 [2024-12-09T05:20:32.374Z] =================================================================================================================== 01:25:40.757 [2024-12-09T05:20:32.374Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82642 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82642 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
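The failure above comes from the key file's 0666 mode: keyring_file_add_key refuses keys readable beyond the owner, which is why the test later restores 0600. A small sketch of the equivalent pre-check; the exact mask the keyring module enforces is an assumption here (owner-only access):

    import os
    import stat

    key_path = "/tmp/tmp.oT7ReP1J2E"  # the temp key file from the log
    mode = stat.S_IMODE(os.stat(key_path).st_mode)
    if mode & 0o077:
        # Group/other bits set (e.g. 0666): the keyring refuses to load the key.
        print(f"key file mode is {oct(mode)}; run chmod 0600 {key_path} first")
    else:
        print("owner-only permissions; keyring_file_add_key should accept it")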
01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 82388 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82388 ']' 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82388 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82388 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:40.757 killing process with pid 82388 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82388' 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82388 01:25:40.757 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82388 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82700 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82700 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82700 ']' 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:25:41.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:41.014 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:41.014 [2024-12-09 05:20:32.538903] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
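Each "Waiting for process to start up and listen on UNIX domain socket ..." line corresponds to polling the freshly started target or bdevperf until its RPC socket accepts connections. A sketch of that wait loop; the timeout and poll interval are illustrative values, not the ones the harness uses:

    import socket
    import time

    def wait_for_listen(sock_path, timeout_s=30.0, interval_s=0.2):
        # Retry connecting until the just-launched process has bound its RPC socket.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return
            except OSError:
                time.sleep(interval_s)
        raise TimeoutError(f"{sock_path} never started listening")

    wait_for_listen("/var/tmp/spdk.sock")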
01:25:41.014 [2024-12-09 05:20:32.538967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:41.270 [2024-12-09 05:20:32.668580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:41.270 [2024-12-09 05:20:32.697715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:41.270 [2024-12-09 05:20:32.697749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:41.270 [2024-12-09 05:20:32.697755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:41.270 [2024-12-09 05:20:32.697759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:41.270 [2024-12-09 05:20:32.697763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:41.270 [2024-12-09 05:20:32.697964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oT7ReP1J2E 01:25:41.835 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:25:42.093 [2024-12-09 05:20:33.587390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:42.093 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:25:42.351 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:25:42.608 [2024-12-09 05:20:34.003451] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:25:42.608 [2024-12-09 05:20:34.003590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:42.608 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:25:42.608 malloc0 01:25:42.866 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:25:42.866 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:43.124 [2024-12-09 05:20:34.613329] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oT7ReP1J2E': 0100666 01:25:43.124 [2024-12-09 05:20:34.613356] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:25:43.124 2024/12/09 05:20:34 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.oT7ReP1J2E], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:25:43.124 request: 01:25:43.124 { 01:25:43.124 "method": "keyring_file_add_key", 01:25:43.124 "params": { 01:25:43.124 "name": "key0", 01:25:43.124 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:43.124 } 01:25:43.124 } 01:25:43.124 Got JSON-RPC error response 01:25:43.124 GoRPCClient: error on JSON-RPC call 01:25:43.124 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:25:43.382 [2024-12-09 05:20:34.821376] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 01:25:43.382 [2024-12-09 05:20:34.821411] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:25:43.382 2024/12/09 05:20:34 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 01:25:43.382 request: 01:25:43.382 { 01:25:43.382 "method": "nvmf_subsystem_add_host", 01:25:43.382 "params": { 01:25:43.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:43.382 "host": "nqn.2016-06.io.spdk:host1", 01:25:43.382 "psk": "key0" 01:25:43.382 } 01:25:43.382 } 01:25:43.382 Got JSON-RPC error response 01:25:43.382 GoRPCClient: error on JSON-RPC call 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 82700 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82700 ']' 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 82700 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82700 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:43.382 killing process with pid 82700 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82700' 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82700 01:25:43.382 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82700 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.oT7ReP1J2E 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82814 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82814 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82814 ']' 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:43.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:43.647 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:43.647 [2024-12-09 05:20:35.067582] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:43.647 [2024-12-09 05:20:35.067645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:43.647 [2024-12-09 05:20:35.202349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:43.647 [2024-12-09 05:20:35.232059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
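The NOT block that just finished (target/tls.sh@178) inverts the check: setup_nvmf_tgt is expected to fail because key0 could not be registered from the 0666 file, and nvmf_subsystem_add_host duly returns Code=-32603. A sketch of such an expect-failure helper built on rpc.py; treating a non-zero exit status as the failure signal is an assumption about how rpc.py reports RPC errors:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in this log

    def expect_rpc_failure(*rpc_args):
        # Run the RPC and insist that it fails; a zero exit status means the
        # negative test itself has failed.
        proc = subprocess.run([RPC, *rpc_args], capture_output=True, text=True)
        assert proc.returncode != 0, f"expected failure, got success: {rpc_args}"
        return proc.stderr  # carries the Code=-32603 style message seen above

    err = expect_rpc_failure("nvmf_subsystem_add_host",
                             "nqn.2016-06.io.spdk:cnode1",
                             "nqn.2016-06.io.spdk:host1", "--psk", "key0")
    print(err.strip())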
01:25:43.647 [2024-12-09 05:20:35.232092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:43.647 [2024-12-09 05:20:35.232098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:43.647 [2024-12-09 05:20:35.232103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:43.647 [2024-12-09 05:20:35.232107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:43.647 [2024-12-09 05:20:35.232367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oT7ReP1J2E 01:25:44.580 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:25:44.580 [2024-12-09 05:20:36.145858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:44.580 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:25:44.839 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:25:45.096 [2024-12-09 05:20:36.557919] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:25:45.096 [2024-12-09 05:20:36.558049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:45.096 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:25:45.354 malloc0 01:25:45.354 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:25:45.612 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:45.612 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=82929 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:45.870 05:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 82929 /var/tmp/bdevperf.sock 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82929 ']' 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:45.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:45.870 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:45.870 [2024-12-09 05:20:37.454773] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:45.870 [2024-12-09 05:20:37.454837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82929 ] 01:25:46.128 [2024-12-09 05:20:37.585894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:46.128 [2024-12-09 05:20:37.621592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:47.062 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:47.062 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:47.062 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:25:47.062 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:25:47.319 [2024-12-09 05:20:38.726823] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:47.319 TLSTESTn1 01:25:47.319 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:25:47.577 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 01:25:47.577 "subsystems": [ 01:25:47.577 { 01:25:47.577 "subsystem": "keyring", 01:25:47.577 "config": [ 01:25:47.577 { 01:25:47.577 "method": "keyring_file_add_key", 01:25:47.577 "params": { 01:25:47.577 "name": "key0", 01:25:47.577 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:47.577 } 01:25:47.577 } 01:25:47.577 ] 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "subsystem": "iobuf", 01:25:47.577 "config": [ 01:25:47.577 { 01:25:47.577 "method": "iobuf_set_options", 01:25:47.577 "params": { 01:25:47.577 "enable_numa": false, 01:25:47.577 "large_bufsize": 135168, 01:25:47.577 "large_pool_count": 1024, 01:25:47.577 
"small_bufsize": 8192, 01:25:47.577 "small_pool_count": 8192 01:25:47.577 } 01:25:47.577 } 01:25:47.577 ] 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "subsystem": "sock", 01:25:47.577 "config": [ 01:25:47.577 { 01:25:47.577 "method": "sock_set_default_impl", 01:25:47.577 "params": { 01:25:47.577 "impl_name": "posix" 01:25:47.577 } 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "method": "sock_impl_set_options", 01:25:47.577 "params": { 01:25:47.577 "enable_ktls": false, 01:25:47.577 "enable_placement_id": 0, 01:25:47.577 "enable_quickack": false, 01:25:47.577 "enable_recv_pipe": true, 01:25:47.577 "enable_zerocopy_send_client": false, 01:25:47.577 "enable_zerocopy_send_server": true, 01:25:47.577 "impl_name": "ssl", 01:25:47.577 "recv_buf_size": 4096, 01:25:47.577 "send_buf_size": 4096, 01:25:47.577 "tls_version": 0, 01:25:47.577 "zerocopy_threshold": 0 01:25:47.577 } 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "method": "sock_impl_set_options", 01:25:47.577 "params": { 01:25:47.577 "enable_ktls": false, 01:25:47.577 "enable_placement_id": 0, 01:25:47.577 "enable_quickack": false, 01:25:47.577 "enable_recv_pipe": true, 01:25:47.577 "enable_zerocopy_send_client": false, 01:25:47.577 "enable_zerocopy_send_server": true, 01:25:47.577 "impl_name": "posix", 01:25:47.577 "recv_buf_size": 2097152, 01:25:47.577 "send_buf_size": 2097152, 01:25:47.577 "tls_version": 0, 01:25:47.577 "zerocopy_threshold": 0 01:25:47.577 } 01:25:47.577 } 01:25:47.577 ] 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "subsystem": "vmd", 01:25:47.577 "config": [] 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "subsystem": "accel", 01:25:47.577 "config": [ 01:25:47.577 { 01:25:47.577 "method": "accel_set_options", 01:25:47.577 "params": { 01:25:47.577 "buf_count": 2048, 01:25:47.577 "large_cache_size": 16, 01:25:47.577 "sequence_count": 2048, 01:25:47.577 "small_cache_size": 128, 01:25:47.577 "task_count": 2048 01:25:47.577 } 01:25:47.577 } 01:25:47.577 ] 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "subsystem": "bdev", 01:25:47.577 "config": [ 01:25:47.577 { 01:25:47.577 "method": "bdev_set_options", 01:25:47.577 "params": { 01:25:47.577 "bdev_auto_examine": true, 01:25:47.577 "bdev_io_cache_size": 256, 01:25:47.577 "bdev_io_pool_size": 65535, 01:25:47.577 "iobuf_large_cache_size": 16, 01:25:47.577 "iobuf_small_cache_size": 128 01:25:47.577 } 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "method": "bdev_raid_set_options", 01:25:47.577 "params": { 01:25:47.577 "process_max_bandwidth_mb_sec": 0, 01:25:47.577 "process_window_size_kb": 1024 01:25:47.577 } 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "method": "bdev_iscsi_set_options", 01:25:47.577 "params": { 01:25:47.577 "timeout_sec": 30 01:25:47.577 } 01:25:47.577 }, 01:25:47.577 { 01:25:47.577 "method": "bdev_nvme_set_options", 01:25:47.577 "params": { 01:25:47.577 "action_on_timeout": "none", 01:25:47.577 "allow_accel_sequence": false, 01:25:47.577 "arbitration_burst": 0, 01:25:47.577 "bdev_retry_count": 3, 01:25:47.577 "ctrlr_loss_timeout_sec": 0, 01:25:47.577 "delay_cmd_submit": true, 01:25:47.577 "dhchap_dhgroups": [ 01:25:47.577 "null", 01:25:47.577 "ffdhe2048", 01:25:47.578 "ffdhe3072", 01:25:47.578 "ffdhe4096", 01:25:47.578 "ffdhe6144", 01:25:47.578 "ffdhe8192" 01:25:47.578 ], 01:25:47.578 "dhchap_digests": [ 01:25:47.578 "sha256", 01:25:47.578 "sha384", 01:25:47.578 "sha512" 01:25:47.578 ], 01:25:47.578 "disable_auto_failback": false, 01:25:47.578 "fast_io_fail_timeout_sec": 0, 01:25:47.578 "generate_uuids": false, 01:25:47.578 "high_priority_weight": 0, 01:25:47.578 
"io_path_stat": false, 01:25:47.578 "io_queue_requests": 0, 01:25:47.578 "keep_alive_timeout_ms": 10000, 01:25:47.578 "low_priority_weight": 0, 01:25:47.578 "medium_priority_weight": 0, 01:25:47.578 "nvme_adminq_poll_period_us": 10000, 01:25:47.578 "nvme_error_stat": false, 01:25:47.578 "nvme_ioq_poll_period_us": 0, 01:25:47.578 "rdma_cm_event_timeout_ms": 0, 01:25:47.578 "rdma_max_cq_size": 0, 01:25:47.578 "rdma_srq_size": 0, 01:25:47.578 "reconnect_delay_sec": 0, 01:25:47.578 "timeout_admin_us": 0, 01:25:47.578 "timeout_us": 0, 01:25:47.578 "transport_ack_timeout": 0, 01:25:47.578 "transport_retry_count": 4, 01:25:47.578 "transport_tos": 0 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "bdev_nvme_set_hotplug", 01:25:47.578 "params": { 01:25:47.578 "enable": false, 01:25:47.578 "period_us": 100000 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "bdev_malloc_create", 01:25:47.578 "params": { 01:25:47.578 "block_size": 4096, 01:25:47.578 "dif_is_head_of_md": false, 01:25:47.578 "dif_pi_format": 0, 01:25:47.578 "dif_type": 0, 01:25:47.578 "md_size": 0, 01:25:47.578 "name": "malloc0", 01:25:47.578 "num_blocks": 8192, 01:25:47.578 "optimal_io_boundary": 0, 01:25:47.578 "physical_block_size": 4096, 01:25:47.578 "uuid": "635d2ec3-a082-4e0f-80fa-33069990e9a1" 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "bdev_wait_for_examine" 01:25:47.578 } 01:25:47.578 ] 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "subsystem": "nbd", 01:25:47.578 "config": [] 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "subsystem": "scheduler", 01:25:47.578 "config": [ 01:25:47.578 { 01:25:47.578 "method": "framework_set_scheduler", 01:25:47.578 "params": { 01:25:47.578 "name": "static" 01:25:47.578 } 01:25:47.578 } 01:25:47.578 ] 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "subsystem": "nvmf", 01:25:47.578 "config": [ 01:25:47.578 { 01:25:47.578 "method": "nvmf_set_config", 01:25:47.578 "params": { 01:25:47.578 "admin_cmd_passthru": { 01:25:47.578 "identify_ctrlr": false 01:25:47.578 }, 01:25:47.578 "dhchap_dhgroups": [ 01:25:47.578 "null", 01:25:47.578 "ffdhe2048", 01:25:47.578 "ffdhe3072", 01:25:47.578 "ffdhe4096", 01:25:47.578 "ffdhe6144", 01:25:47.578 "ffdhe8192" 01:25:47.578 ], 01:25:47.578 "dhchap_digests": [ 01:25:47.578 "sha256", 01:25:47.578 "sha384", 01:25:47.578 "sha512" 01:25:47.578 ], 01:25:47.578 "discovery_filter": "match_any" 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_set_max_subsystems", 01:25:47.578 "params": { 01:25:47.578 "max_subsystems": 1024 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_set_crdt", 01:25:47.578 "params": { 01:25:47.578 "crdt1": 0, 01:25:47.578 "crdt2": 0, 01:25:47.578 "crdt3": 0 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_create_transport", 01:25:47.578 "params": { 01:25:47.578 "abort_timeout_sec": 1, 01:25:47.578 "ack_timeout": 0, 01:25:47.578 "buf_cache_size": 4294967295, 01:25:47.578 "c2h_success": false, 01:25:47.578 "data_wr_pool_size": 0, 01:25:47.578 "dif_insert_or_strip": false, 01:25:47.578 "in_capsule_data_size": 4096, 01:25:47.578 "io_unit_size": 131072, 01:25:47.578 "max_aq_depth": 128, 01:25:47.578 "max_io_qpairs_per_ctrlr": 127, 01:25:47.578 "max_io_size": 131072, 01:25:47.578 "max_queue_depth": 128, 01:25:47.578 "num_shared_buffers": 511, 01:25:47.578 "sock_priority": 0, 01:25:47.578 "trtype": "TCP", 01:25:47.578 "zcopy": false 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": 
"nvmf_create_subsystem", 01:25:47.578 "params": { 01:25:47.578 "allow_any_host": false, 01:25:47.578 "ana_reporting": false, 01:25:47.578 "max_cntlid": 65519, 01:25:47.578 "max_namespaces": 10, 01:25:47.578 "min_cntlid": 1, 01:25:47.578 "model_number": "SPDK bdev Controller", 01:25:47.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:47.578 "serial_number": "SPDK00000000000001" 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_subsystem_add_host", 01:25:47.578 "params": { 01:25:47.578 "host": "nqn.2016-06.io.spdk:host1", 01:25:47.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:47.578 "psk": "key0" 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_subsystem_add_ns", 01:25:47.578 "params": { 01:25:47.578 "namespace": { 01:25:47.578 "bdev_name": "malloc0", 01:25:47.578 "nguid": "635D2EC3A0824E0F80FA33069990E9A1", 01:25:47.578 "no_auto_visible": false, 01:25:47.578 "nsid": 1, 01:25:47.578 "uuid": "635d2ec3-a082-4e0f-80fa-33069990e9a1" 01:25:47.578 }, 01:25:47.578 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:25:47.578 } 01:25:47.578 }, 01:25:47.578 { 01:25:47.578 "method": "nvmf_subsystem_add_listener", 01:25:47.578 "params": { 01:25:47.578 "listen_address": { 01:25:47.578 "adrfam": "IPv4", 01:25:47.578 "traddr": "10.0.0.3", 01:25:47.578 "trsvcid": "4420", 01:25:47.578 "trtype": "TCP" 01:25:47.578 }, 01:25:47.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:47.578 "secure_channel": true 01:25:47.578 } 01:25:47.578 } 01:25:47.578 ] 01:25:47.578 } 01:25:47.578 ] 01:25:47.578 }' 01:25:47.578 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:25:47.836 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 01:25:47.836 "subsystems": [ 01:25:47.836 { 01:25:47.836 "subsystem": "keyring", 01:25:47.836 "config": [ 01:25:47.836 { 01:25:47.836 "method": "keyring_file_add_key", 01:25:47.836 "params": { 01:25:47.836 "name": "key0", 01:25:47.836 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:47.836 } 01:25:47.836 } 01:25:47.836 ] 01:25:47.836 }, 01:25:47.836 { 01:25:47.836 "subsystem": "iobuf", 01:25:47.836 "config": [ 01:25:47.836 { 01:25:47.836 "method": "iobuf_set_options", 01:25:47.836 "params": { 01:25:47.836 "enable_numa": false, 01:25:47.836 "large_bufsize": 135168, 01:25:47.836 "large_pool_count": 1024, 01:25:47.836 "small_bufsize": 8192, 01:25:47.836 "small_pool_count": 8192 01:25:47.836 } 01:25:47.836 } 01:25:47.836 ] 01:25:47.836 }, 01:25:47.836 { 01:25:47.836 "subsystem": "sock", 01:25:47.836 "config": [ 01:25:47.836 { 01:25:47.836 "method": "sock_set_default_impl", 01:25:47.836 "params": { 01:25:47.836 "impl_name": "posix" 01:25:47.836 } 01:25:47.836 }, 01:25:47.836 { 01:25:47.836 "method": "sock_impl_set_options", 01:25:47.836 "params": { 01:25:47.837 "enable_ktls": false, 01:25:47.837 "enable_placement_id": 0, 01:25:47.837 "enable_quickack": false, 01:25:47.837 "enable_recv_pipe": true, 01:25:47.837 "enable_zerocopy_send_client": false, 01:25:47.837 "enable_zerocopy_send_server": true, 01:25:47.837 "impl_name": "ssl", 01:25:47.837 "recv_buf_size": 4096, 01:25:47.837 "send_buf_size": 4096, 01:25:47.837 "tls_version": 0, 01:25:47.837 "zerocopy_threshold": 0 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "sock_impl_set_options", 01:25:47.837 "params": { 01:25:47.837 "enable_ktls": false, 01:25:47.837 "enable_placement_id": 0, 01:25:47.837 "enable_quickack": false, 01:25:47.837 "enable_recv_pipe": true, 
01:25:47.837 "enable_zerocopy_send_client": false, 01:25:47.837 "enable_zerocopy_send_server": true, 01:25:47.837 "impl_name": "posix", 01:25:47.837 "recv_buf_size": 2097152, 01:25:47.837 "send_buf_size": 2097152, 01:25:47.837 "tls_version": 0, 01:25:47.837 "zerocopy_threshold": 0 01:25:47.837 } 01:25:47.837 } 01:25:47.837 ] 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "subsystem": "vmd", 01:25:47.837 "config": [] 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "subsystem": "accel", 01:25:47.837 "config": [ 01:25:47.837 { 01:25:47.837 "method": "accel_set_options", 01:25:47.837 "params": { 01:25:47.837 "buf_count": 2048, 01:25:47.837 "large_cache_size": 16, 01:25:47.837 "sequence_count": 2048, 01:25:47.837 "small_cache_size": 128, 01:25:47.837 "task_count": 2048 01:25:47.837 } 01:25:47.837 } 01:25:47.837 ] 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "subsystem": "bdev", 01:25:47.837 "config": [ 01:25:47.837 { 01:25:47.837 "method": "bdev_set_options", 01:25:47.837 "params": { 01:25:47.837 "bdev_auto_examine": true, 01:25:47.837 "bdev_io_cache_size": 256, 01:25:47.837 "bdev_io_pool_size": 65535, 01:25:47.837 "iobuf_large_cache_size": 16, 01:25:47.837 "iobuf_small_cache_size": 128 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_raid_set_options", 01:25:47.837 "params": { 01:25:47.837 "process_max_bandwidth_mb_sec": 0, 01:25:47.837 "process_window_size_kb": 1024 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_iscsi_set_options", 01:25:47.837 "params": { 01:25:47.837 "timeout_sec": 30 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_nvme_set_options", 01:25:47.837 "params": { 01:25:47.837 "action_on_timeout": "none", 01:25:47.837 "allow_accel_sequence": false, 01:25:47.837 "arbitration_burst": 0, 01:25:47.837 "bdev_retry_count": 3, 01:25:47.837 "ctrlr_loss_timeout_sec": 0, 01:25:47.837 "delay_cmd_submit": true, 01:25:47.837 "dhchap_dhgroups": [ 01:25:47.837 "null", 01:25:47.837 "ffdhe2048", 01:25:47.837 "ffdhe3072", 01:25:47.837 "ffdhe4096", 01:25:47.837 "ffdhe6144", 01:25:47.837 "ffdhe8192" 01:25:47.837 ], 01:25:47.837 "dhchap_digests": [ 01:25:47.837 "sha256", 01:25:47.837 "sha384", 01:25:47.837 "sha512" 01:25:47.837 ], 01:25:47.837 "disable_auto_failback": false, 01:25:47.837 "fast_io_fail_timeout_sec": 0, 01:25:47.837 "generate_uuids": false, 01:25:47.837 "high_priority_weight": 0, 01:25:47.837 "io_path_stat": false, 01:25:47.837 "io_queue_requests": 512, 01:25:47.837 "keep_alive_timeout_ms": 10000, 01:25:47.837 "low_priority_weight": 0, 01:25:47.837 "medium_priority_weight": 0, 01:25:47.837 "nvme_adminq_poll_period_us": 10000, 01:25:47.837 "nvme_error_stat": false, 01:25:47.837 "nvme_ioq_poll_period_us": 0, 01:25:47.837 "rdma_cm_event_timeout_ms": 0, 01:25:47.837 "rdma_max_cq_size": 0, 01:25:47.837 "rdma_srq_size": 0, 01:25:47.837 "reconnect_delay_sec": 0, 01:25:47.837 "timeout_admin_us": 0, 01:25:47.837 "timeout_us": 0, 01:25:47.837 "transport_ack_timeout": 0, 01:25:47.837 "transport_retry_count": 4, 01:25:47.837 "transport_tos": 0 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_nvme_attach_controller", 01:25:47.837 "params": { 01:25:47.837 "adrfam": "IPv4", 01:25:47.837 "ctrlr_loss_timeout_sec": 0, 01:25:47.837 "ddgst": false, 01:25:47.837 "fast_io_fail_timeout_sec": 0, 01:25:47.837 "hdgst": false, 01:25:47.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:47.837 "multipath": "multipath", 01:25:47.837 "name": "TLSTEST", 01:25:47.837 "prchk_guard": false, 01:25:47.837 "prchk_reftag": 
false, 01:25:47.837 "psk": "key0", 01:25:47.837 "reconnect_delay_sec": 0, 01:25:47.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:47.837 "traddr": "10.0.0.3", 01:25:47.837 "trsvcid": "4420", 01:25:47.837 "trtype": "TCP" 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_nvme_set_hotplug", 01:25:47.837 "params": { 01:25:47.837 "enable": false, 01:25:47.837 "period_us": 100000 01:25:47.837 } 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "method": "bdev_wait_for_examine" 01:25:47.837 } 01:25:47.837 ] 01:25:47.837 }, 01:25:47.837 { 01:25:47.837 "subsystem": "nbd", 01:25:47.837 "config": [] 01:25:47.837 } 01:25:47.837 ] 01:25:47.837 }' 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 82929 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82929 ']' 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82929 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82929 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:25:47.837 killing process with pid 82929 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82929' 01:25:47.837 Received shutdown signal, test time was about 10.000000 seconds 01:25:47.837 01:25:47.837 Latency(us) 01:25:47.837 [2024-12-09T05:20:39.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:47.837 [2024-12-09T05:20:39.454Z] =================================================================================================================== 01:25:47.837 [2024-12-09T05:20:39.454Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82929 01:25:47.837 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82929 01:25:48.095 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 82814 01:25:48.095 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82814 ']' 01:25:48.095 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82814 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82814 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:48.096 killing process with pid 82814 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82814' 01:25:48.096 
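The JSON above is the bdevperf runtime configuration captured with save_config over the bdevperf RPC socket. The same call works against any running SPDK application, and its output can be fed back at startup; a minimal sketch, where the destination file is a hypothetical choice rather than anything this run uses:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bdevperf_config.json  # hypothetical output path
  # the saved JSON can later be passed back at startup via the application's -c option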
05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82814 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82814 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 01:25:48.096 "subsystems": [ 01:25:48.096 { 01:25:48.096 "subsystem": "keyring", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "keyring_file_add_key", 01:25:48.096 "params": { 01:25:48.096 "name": "key0", 01:25:48.096 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "iobuf", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "iobuf_set_options", 01:25:48.096 "params": { 01:25:48.096 "enable_numa": false, 01:25:48.096 "large_bufsize": 135168, 01:25:48.096 "large_pool_count": 1024, 01:25:48.096 "small_bufsize": 8192, 01:25:48.096 "small_pool_count": 8192 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "sock", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "sock_set_default_impl", 01:25:48.096 "params": { 01:25:48.096 "impl_name": "posix" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "sock_impl_set_options", 01:25:48.096 "params": { 01:25:48.096 "enable_ktls": false, 01:25:48.096 "enable_placement_id": 0, 01:25:48.096 "enable_quickack": false, 01:25:48.096 "enable_recv_pipe": true, 01:25:48.096 "enable_zerocopy_send_client": false, 01:25:48.096 "enable_zerocopy_send_server": true, 01:25:48.096 "impl_name": "ssl", 01:25:48.096 "recv_buf_size": 4096, 01:25:48.096 "send_buf_size": 4096, 01:25:48.096 "tls_version": 0, 01:25:48.096 "zerocopy_threshold": 0 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "sock_impl_set_options", 01:25:48.096 "params": { 01:25:48.096 "enable_ktls": false, 01:25:48.096 "enable_placement_id": 0, 01:25:48.096 "enable_quickack": false, 01:25:48.096 "enable_recv_pipe": true, 01:25:48.096 "enable_zerocopy_send_client": false, 01:25:48.096 "enable_zerocopy_send_server": true, 01:25:48.096 "impl_name": "posix", 01:25:48.096 "recv_buf_size": 2097152, 01:25:48.096 "send_buf_size": 2097152, 01:25:48.096 "tls_version": 0, 01:25:48.096 "zerocopy_threshold": 0 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "vmd", 01:25:48.096 "config": [] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "accel", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "accel_set_options", 01:25:48.096 "params": { 01:25:48.096 "buf_count": 2048, 01:25:48.096 "large_cache_size": 16, 01:25:48.096 "sequence_count": 2048, 01:25:48.096 "small_cache_size": 128, 01:25:48.096 "task_count": 2048 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "bdev", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "bdev_set_options", 01:25:48.096 "params": { 01:25:48.096 "bdev_auto_examine": true, 01:25:48.096 "bdev_io_cache_size": 256, 
01:25:48.096 "bdev_io_pool_size": 65535, 01:25:48.096 "iobuf_large_cache_size": 16, 01:25:48.096 "iobuf_small_cache_size": 128 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_raid_set_options", 01:25:48.096 "params": { 01:25:48.096 "process_max_bandwidth_mb_sec": 0, 01:25:48.096 "process_window_size_kb": 1024 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_iscsi_set_options", 01:25:48.096 "params": { 01:25:48.096 "timeout_sec": 30 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_nvme_set_options", 01:25:48.096 "params": { 01:25:48.096 "action_on_timeout": "none", 01:25:48.096 "allow_accel_sequence": false, 01:25:48.096 "arbitration_burst": 0, 01:25:48.096 "bdev_retry_count": 3, 01:25:48.096 "ctrlr_loss_timeout_sec": 0, 01:25:48.096 "delay_cmd_submit": true, 01:25:48.096 "dhchap_dhgroups": [ 01:25:48.096 "null", 01:25:48.096 "ffdhe2048", 01:25:48.096 "ffdhe3072", 01:25:48.096 "ffdhe4096", 01:25:48.096 "ffdhe6144", 01:25:48.096 "ffdhe8192" 01:25:48.096 ], 01:25:48.096 "dhchap_digests": [ 01:25:48.096 "sha256", 01:25:48.096 "sha384", 01:25:48.096 "sha512" 01:25:48.096 ], 01:25:48.096 "disable_auto_failback": false, 01:25:48.096 "fast_io_fail_timeout_sec": 0, 01:25:48.096 "generate_uuids": false, 01:25:48.096 "high_priority_weight": 0, 01:25:48.096 "io_path_stat": false, 01:25:48.096 "io_queue_requests": 0, 01:25:48.096 "keep_alive_timeout_ms": 10000, 01:25:48.096 "low_priority_weight": 0, 01:25:48.096 "medium_priority_weight": 0, 01:25:48.096 "nvme_adminq_poll_period_us": 10000, 01:25:48.096 "nvme_error_stat": false, 01:25:48.096 "nvme_ioq_poll_period_us": 0, 01:25:48.096 "rdma_cm_event_timeout_ms": 0, 01:25:48.096 "rdma_max_cq_size": 0, 01:25:48.096 "rdma_srq_size": 0, 01:25:48.096 "reconnect_delay_sec": 0, 01:25:48.096 "timeout_admin_us": 0, 01:25:48.096 "timeout_us": 0, 01:25:48.096 "transport_ack_timeout": 0, 01:25:48.096 "transport_retry_count": 4, 01:25:48.096 "transport_tos": 0 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_nvme_set_hotplug", 01:25:48.096 "params": { 01:25:48.096 "enable": false, 01:25:48.096 "period_us": 100000 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_malloc_create", 01:25:48.096 "params": { 01:25:48.096 "block_size": 4096, 01:25:48.096 "dif_is_head_of_md": false, 01:25:48.096 "dif_pi_format": 0, 01:25:48.096 "dif_type": 0, 01:25:48.096 "md_size": 0, 01:25:48.096 "name": "malloc0", 01:25:48.096 "num_blocks": 8192, 01:25:48.096 "optimal_io_boundary": 0, 01:25:48.096 "physical_block_size": 4096, 01:25:48.096 "uuid": "635d2ec3-a082-4e0f-80fa-33069990e9a1" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "bdev_wait_for_examine" 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "nbd", 01:25:48.096 "config": [] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "scheduler", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "framework_set_scheduler", 01:25:48.096 "params": { 01:25:48.096 "name": "static" 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "subsystem": "nvmf", 01:25:48.096 "config": [ 01:25:48.096 { 01:25:48.096 "method": "nvmf_set_config", 01:25:48.096 "params": { 01:25:48.096 "admin_cmd_passthru": { 01:25:48.096 "identify_ctrlr": false 01:25:48.096 }, 01:25:48.096 "dhchap_dhgroups": [ 01:25:48.096 "null", 01:25:48.096 "ffdhe2048", 01:25:48.096 "ffdhe3072", 01:25:48.096 "ffdhe4096", 01:25:48.096 "ffdhe6144", 
01:25:48.096 "ffdhe8192" 01:25:48.096 ], 01:25:48.096 "dhchap_digests": [ 01:25:48.096 "sha256", 01:25:48.096 "sha384", 01:25:48.096 "sha512" 01:25:48.096 ], 01:25:48.096 "discovery_filter": "match_any" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_set_max_subsystems", 01:25:48.096 "params": { 01:25:48.096 "max_subsystems": 1024 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_set_crdt", 01:25:48.096 "params": { 01:25:48.096 "crdt1": 0, 01:25:48.096 "crdt2": 0, 01:25:48.096 "crdt3": 0 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_create_transport", 01:25:48.096 "params": { 01:25:48.096 "abort_timeout_sec": 1, 01:25:48.096 "ack_timeout": 0, 01:25:48.096 "buf_cache_size": 4294967295, 01:25:48.096 "c2h_success": false, 01:25:48.096 "data_wr_pool_size": 0, 01:25:48.096 "dif_insert_or_strip": false, 01:25:48.096 "in_capsule_data_size": 4096, 01:25:48.096 "io_unit_size": 131072, 01:25:48.096 "max_aq_depth": 128, 01:25:48.096 "max_io_qpairs_per_ctrlr": 127, 01:25:48.096 "max_io_size": 131072, 01:25:48.096 "max_queue_depth": 128, 01:25:48.096 "num_shared_buffers": 511, 01:25:48.096 "sock_priority": 0, 01:25:48.096 "trtype": "TCP", 01:25:48.096 "zcopy": false 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_create_subsystem", 01:25:48.096 "params": { 01:25:48.096 "allow_any_host": false, 01:25:48.096 "ana_reporting": false, 01:25:48.096 "max_cntlid": 65519, 01:25:48.096 "max_namespaces": 10, 01:25:48.096 "min_cntlid": 1, 01:25:48.096 "model_number": "SPDK bdev Controller", 01:25:48.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:48.096 "serial_number": "SPDK00000000000001" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_subsystem_add_host", 01:25:48.096 "params": { 01:25:48.096 "host": "nqn.2016-06.io.spdk:host1", 01:25:48.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:48.096 "psk": "key0" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_subsystem_add_ns", 01:25:48.096 "params": { 01:25:48.096 "namespace": { 01:25:48.096 "bdev_name": "malloc0", 01:25:48.096 "nguid": "635D2EC3A0824E0F80FA33069990E9A1", 01:25:48.096 "no_auto_visible": false, 01:25:48.096 "nsid": 1, 01:25:48.096 "uuid": "635d2ec3-a082-4e0f-80fa-33069990e9a1" 01:25:48.096 }, 01:25:48.096 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:25:48.096 } 01:25:48.096 }, 01:25:48.096 { 01:25:48.096 "method": "nvmf_subsystem_add_listener", 01:25:48.096 "params": { 01:25:48.096 "listen_address": { 01:25:48.096 "adrfam": "IPv4", 01:25:48.096 "traddr": "10.0.0.3", 01:25:48.096 "trsvcid": "4420", 01:25:48.096 "trtype": "TCP" 01:25:48.096 }, 01:25:48.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:25:48.096 "secure_channel": true 01:25:48.096 } 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 } 01:25:48.096 ] 01:25:48.096 }' 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83009 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83009 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83009 ']' 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:48.096 05:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:48.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:48.096 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:48.097 [2024-12-09 05:20:39.709326] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:25:48.097 [2024-12-09 05:20:39.709379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:48.354 [2024-12-09 05:20:39.833855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:48.354 [2024-12-09 05:20:39.862989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:48.354 [2024-12-09 05:20:39.863022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:48.354 [2024-12-09 05:20:39.863028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:48.355 [2024-12-09 05:20:39.863032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:48.355 [2024-12-09 05:20:39.863035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
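The target for this stage is started with its JSON configuration delivered on file descriptor 62 rather than a file on disk (nvmf_tgt ... -c /dev/fd/62, wrapped in ip netns exec nvmf_tgt_ns_spdk in this run). A minimal sketch of that pattern, assuming the JSON echoed above is held in a $config shell variable; the exact nvmfappstart wrapper may differ:
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62< <(echo "$config") &
  # writing the JSON to a temporary file and passing it with -c <file> is equivalent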
01:25:48.355 [2024-12-09 05:20:39.863263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:48.613 [2024-12-09 05:20:40.057279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:48.613 [2024-12-09 05:20:40.089228] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:25:48.613 [2024-12-09 05:20:40.089359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83052 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83052 /var/tmp/bdevperf.sock 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83052 ']' 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:49.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:25:49.178 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 01:25:49.178 "subsystems": [ 01:25:49.178 { 01:25:49.178 "subsystem": "keyring", 01:25:49.178 "config": [ 01:25:49.178 { 01:25:49.178 "method": "keyring_file_add_key", 01:25:49.178 "params": { 01:25:49.178 "name": "key0", 01:25:49.178 "path": "/tmp/tmp.oT7ReP1J2E" 01:25:49.178 } 01:25:49.178 } 01:25:49.178 ] 01:25:49.178 }, 01:25:49.178 { 01:25:49.178 "subsystem": "iobuf", 01:25:49.178 "config": [ 01:25:49.178 { 01:25:49.178 "method": "iobuf_set_options", 01:25:49.178 "params": { 01:25:49.178 "enable_numa": false, 01:25:49.178 "large_bufsize": 135168, 01:25:49.178 "large_pool_count": 1024, 01:25:49.178 "small_bufsize": 8192, 01:25:49.178 "small_pool_count": 8192 01:25:49.178 } 01:25:49.178 } 01:25:49.178 ] 01:25:49.178 }, 01:25:49.178 { 01:25:49.178 "subsystem": "sock", 01:25:49.178 "config": [ 01:25:49.178 { 01:25:49.178 "method": "sock_set_default_impl", 01:25:49.178 "params": { 01:25:49.178 "impl_name": "posix" 01:25:49.178 } 01:25:49.178 }, 01:25:49.178 { 01:25:49.178 "method": "sock_impl_set_options", 01:25:49.178 "params": { 01:25:49.178 "enable_ktls": false, 01:25:49.178 "enable_placement_id": 0, 01:25:49.178 "enable_quickack": false, 01:25:49.178 "enable_recv_pipe": true, 01:25:49.178 "enable_zerocopy_send_client": false, 01:25:49.178 "enable_zerocopy_send_server": true, 01:25:49.178 "impl_name": "ssl", 01:25:49.178 "recv_buf_size": 4096, 01:25:49.178 "send_buf_size": 4096, 01:25:49.178 "tls_version": 0, 01:25:49.178 "zerocopy_threshold": 0 01:25:49.178 } 01:25:49.178 }, 01:25:49.178 { 01:25:49.178 "method": "sock_impl_set_options", 01:25:49.178 "params": { 01:25:49.178 "enable_ktls": false, 01:25:49.178 "enable_placement_id": 0, 01:25:49.178 "enable_quickack": false, 01:25:49.178 "enable_recv_pipe": true, 01:25:49.178 "enable_zerocopy_send_client": false, 01:25:49.179 "enable_zerocopy_send_server": true, 01:25:49.179 "impl_name": "posix", 01:25:49.179 "recv_buf_size": 2097152, 01:25:49.179 "send_buf_size": 2097152, 01:25:49.179 "tls_version": 0, 01:25:49.179 "zerocopy_threshold": 0 01:25:49.179 } 01:25:49.179 } 01:25:49.179 ] 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "subsystem": "vmd", 01:25:49.179 "config": [] 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "subsystem": "accel", 01:25:49.179 "config": [ 01:25:49.179 { 01:25:49.179 "method": "accel_set_options", 01:25:49.179 "params": { 01:25:49.179 "buf_count": 2048, 01:25:49.179 "large_cache_size": 16, 01:25:49.179 "sequence_count": 2048, 01:25:49.179 "small_cache_size": 128, 01:25:49.179 "task_count": 2048 01:25:49.179 } 01:25:49.179 } 01:25:49.179 ] 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "subsystem": "bdev", 01:25:49.179 "config": [ 01:25:49.179 { 01:25:49.179 "method": "bdev_set_options", 01:25:49.179 "params": { 01:25:49.179 "bdev_auto_examine": true, 01:25:49.179 "bdev_io_cache_size": 256, 01:25:49.179 "bdev_io_pool_size": 65535, 01:25:49.179 "iobuf_large_cache_size": 16, 01:25:49.179 "iobuf_small_cache_size": 128 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_raid_set_options", 
01:25:49.179 "params": { 01:25:49.179 "process_max_bandwidth_mb_sec": 0, 01:25:49.179 "process_window_size_kb": 1024 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_iscsi_set_options", 01:25:49.179 "params": { 01:25:49.179 "timeout_sec": 30 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_nvme_set_options", 01:25:49.179 "params": { 01:25:49.179 "action_on_timeout": "none", 01:25:49.179 "allow_accel_sequence": false, 01:25:49.179 "arbitration_burst": 0, 01:25:49.179 "bdev_retry_count": 3, 01:25:49.179 "ctrlr_loss_timeout_sec": 0, 01:25:49.179 "delay_cmd_submit": true, 01:25:49.179 "dhchap_dhgroups": [ 01:25:49.179 "null", 01:25:49.179 "ffdhe2048", 01:25:49.179 "ffdhe3072", 01:25:49.179 "ffdhe4096", 01:25:49.179 "ffdhe6144", 01:25:49.179 "ffdhe8192" 01:25:49.179 ], 01:25:49.179 "dhchap_digests": [ 01:25:49.179 "sha256", 01:25:49.179 "sha384", 01:25:49.179 "sha512" 01:25:49.179 ], 01:25:49.179 "disable_auto_failback": false, 01:25:49.179 "fast_io_fail_timeout_sec": 0, 01:25:49.179 "generate_uuids": false, 01:25:49.179 "high_priority_weight": 0, 01:25:49.179 "io_path_stat": false, 01:25:49.179 "io_queue_requests": 512, 01:25:49.179 "keep_alive_timeout_ms": 10000, 01:25:49.179 "low_priority_weight": 0, 01:25:49.179 "medium_priority_weight": 0, 01:25:49.179 "nvme_adminq_poll_period_us": 10000, 01:25:49.179 "nvme_error_stat": false, 01:25:49.179 "nvme_ioq_poll_period_us": 0, 01:25:49.179 "rdma_cm_event_timeout_ms": 0, 01:25:49.179 "rdma_max_cq_size": 0, 01:25:49.179 "rdma_srq_size": 0, 01:25:49.179 "reconnect_delay_sec": 0, 01:25:49.179 "timeout_admin_us": 0, 01:25:49.179 "timeout_us": 0, 01:25:49.179 "transport_ack_timeout": 0, 01:25:49.179 "transport_retry_count": 4, 01:25:49.179 "transport_tos": 0 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_nvme_attach_controller", 01:25:49.179 "params": { 01:25:49.179 "adrfam": "IPv4", 01:25:49.179 "ctrlr_loss_timeout_sec": 0, 01:25:49.179 "ddgst": false, 01:25:49.179 "fast_io_fail_timeout_sec": 0, 01:25:49.179 "hdgst": false, 01:25:49.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:25:49.179 "multipath": "multipath", 01:25:49.179 "name": "TLSTEST", 01:25:49.179 "prchk_guard": false, 01:25:49.179 "prchk_reftag": false, 01:25:49.179 "psk": "key0", 01:25:49.179 "reconnect_delay_sec": 0, 01:25:49.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:25:49.179 "traddr": "10.0.0.3", 01:25:49.179 "trsvcid": "4420", 01:25:49.179 "trtype": "TCP" 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_nvme_set_hotplug", 01:25:49.179 "params": { 01:25:49.179 "enable": false, 01:25:49.179 "period_us": 100000 01:25:49.179 } 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "method": "bdev_wait_for_examine" 01:25:49.179 } 01:25:49.179 ] 01:25:49.179 }, 01:25:49.179 { 01:25:49.179 "subsystem": "nbd", 01:25:49.179 "config": [] 01:25:49.179 } 01:25:49.179 ] 01:25:49.179 }' 01:25:49.179 [2024-12-09 05:20:40.610062] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:25:49.179 [2024-12-09 05:20:40.610122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83052 ] 01:25:49.179 [2024-12-09 05:20:40.744873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:49.179 [2024-12-09 05:20:40.775836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:25:49.437 [2024-12-09 05:20:40.910841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:25:50.003 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:50.003 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:25:50.003 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:25:50.003 Running I/O for 10 seconds... 01:25:52.313 7357.00 IOPS, 28.74 MiB/s [2024-12-09T05:20:44.887Z] 7380.00 IOPS, 28.83 MiB/s [2024-12-09T05:20:45.822Z] 7382.33 IOPS, 28.84 MiB/s [2024-12-09T05:20:46.754Z] 7384.50 IOPS, 28.85 MiB/s [2024-12-09T05:20:47.686Z] 7388.20 IOPS, 28.86 MiB/s [2024-12-09T05:20:48.630Z] 7387.83 IOPS, 28.86 MiB/s [2024-12-09T05:20:50.002Z] 7389.86 IOPS, 28.87 MiB/s [2024-12-09T05:20:50.935Z] 7389.88 IOPS, 28.87 MiB/s [2024-12-09T05:20:51.869Z] 7386.89 IOPS, 28.86 MiB/s [2024-12-09T05:20:51.869Z] 7388.40 IOPS, 28.86 MiB/s 01:26:00.252 Latency(us) 01:26:00.252 [2024-12-09T05:20:51.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:00.252 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:26:00.252 Verification LBA range: start 0x0 length 0x2000 01:26:00.252 TLSTESTn1 : 10.01 7394.41 28.88 0.00 0.00 17282.88 3327.21 12905.55 01:26:00.252 [2024-12-09T05:20:51.869Z] =================================================================================================================== 01:26:00.252 [2024-12-09T05:20:51.869Z] Total : 7394.41 28.88 0.00 0.00 17282.88 3327.21 12905.55 01:26:00.252 { 01:26:00.252 "results": [ 01:26:00.252 { 01:26:00.252 "job": "TLSTESTn1", 01:26:00.252 "core_mask": "0x4", 01:26:00.252 "workload": "verify", 01:26:00.252 "status": "finished", 01:26:00.252 "verify_range": { 01:26:00.252 "start": 0, 01:26:00.252 "length": 8192 01:26:00.252 }, 01:26:00.252 "queue_depth": 128, 01:26:00.252 "io_size": 4096, 01:26:00.252 "runtime": 10.009184, 01:26:00.252 "iops": 7394.408974797546, 01:26:00.252 "mibps": 28.884410057802913, 01:26:00.252 "io_failed": 0, 01:26:00.252 "io_timeout": 0, 01:26:00.252 "avg_latency_us": 17282.88280569887, 01:26:00.252 "min_latency_us": 3327.2123076923076, 01:26:00.252 "max_latency_us": 12905.55076923077 01:26:00.252 } 01:26:00.252 ], 01:26:00.252 "core_count": 1 01:26:00.252 } 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83052 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83052 ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83052 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83052 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:26:00.252 killing process with pid 83052 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83052' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83052 01:26:00.252 Received shutdown signal, test time was about 10.000000 seconds 01:26:00.252 01:26:00.252 Latency(us) 01:26:00.252 [2024-12-09T05:20:51.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:00.252 [2024-12-09T05:20:51.869Z] =================================================================================================================== 01:26:00.252 [2024-12-09T05:20:51.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83052 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83009 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83009 ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83009 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83009 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:00.252 killing process with pid 83009 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83009' 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83009 01:26:00.252 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83009 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83198 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83198 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83198 ']' 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:00.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:00.510 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:00.510 [2024-12-09 05:20:51.956102] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:00.510 [2024-12-09 05:20:51.956170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:00.510 [2024-12-09 05:20:52.095965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:00.767 [2024-12-09 05:20:52.130735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:00.767 [2024-12-09 05:20:52.130888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:00.767 [2024-12-09 05:20:52.130946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:00.767 [2024-12-09 05:20:52.130973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:00.767 [2024-12-09 05:20:52.130988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
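The repeated 'Waiting for process to start up and listen on UNIX domain socket ...' messages come from the autotest waitforlisten helper, which in essence polls the application's RPC socket until it answers. A rough equivalent, not the helper's exact implementation, assuming rpc_get_methods as the probe call and the default /var/tmp/spdk.sock socket:
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done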
01:26:00.767 [2024-12-09 05:20:52.131252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.oT7ReP1J2E 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oT7ReP1J2E 01:26:01.333 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:26:01.590 [2024-12-09 05:20:52.966147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:01.590 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:26:01.590 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:26:01.853 [2024-12-09 05:20:53.378189] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:01.853 [2024-12-09 05:20:53.378382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:01.853 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:26:02.111 malloc0 01:26:02.111 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:26:02.369 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:26:02.627 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=83302 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 83302 /var/tmp/bdevperf.sock 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83302 ']' 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
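The setup_nvmf_tgt steps above build the TLS-enabled target: a TCP transport, a subsystem with a malloc-backed namespace, a listener created with -k so it requires a secure channel, and the PSK file registered as key0 and bound to the allowed host NQN. Collected in one place with the same arguments this run used ($rpc is only shorthand for the repo's rpc.py):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0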
01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:26:02.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:02.627 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:02.627 [2024-12-09 05:20:54.226412] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:02.627 [2024-12-09 05:20:54.226470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83302 ] 01:26:02.885 [2024-12-09 05:20:54.364127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:02.885 [2024-12-09 05:20:54.394783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:03.820 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:03.820 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:03.820 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:26:03.820 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:26:04.077 [2024-12-09 05:20:55.496010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:04.077 nvme0n1 01:26:04.077 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:26:04.077 Running I/O for 1 seconds... 
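On the initiator side the same PSK file is registered with bdevperf's keyring and the controller is attached over TCP with --psk before the verify workload starts; the one-second result follows below. The same sequence condensed, with the arguments this run used:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests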
01:26:05.448 7329.00 IOPS, 28.63 MiB/s 01:26:05.448 Latency(us) 01:26:05.448 [2024-12-09T05:20:57.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:05.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:26:05.448 Verification LBA range: start 0x0 length 0x2000 01:26:05.448 nvme0n1 : 1.01 7390.45 28.87 0.00 0.00 17216.88 3554.07 13712.15 01:26:05.448 [2024-12-09T05:20:57.065Z] =================================================================================================================== 01:26:05.448 [2024-12-09T05:20:57.065Z] Total : 7390.45 28.87 0.00 0.00 17216.88 3554.07 13712.15 01:26:05.448 { 01:26:05.448 "results": [ 01:26:05.448 { 01:26:05.448 "job": "nvme0n1", 01:26:05.449 "core_mask": "0x2", 01:26:05.449 "workload": "verify", 01:26:05.449 "status": "finished", 01:26:05.449 "verify_range": { 01:26:05.449 "start": 0, 01:26:05.449 "length": 8192 01:26:05.449 }, 01:26:05.449 "queue_depth": 128, 01:26:05.449 "io_size": 4096, 01:26:05.449 "runtime": 1.00914, 01:26:05.449 "iops": 7390.451275343362, 01:26:05.449 "mibps": 28.868950294310007, 01:26:05.449 "io_failed": 0, 01:26:05.449 "io_timeout": 0, 01:26:05.449 "avg_latency_us": 17216.877836499785, 01:26:05.449 "min_latency_us": 3554.067692307692, 01:26:05.449 "max_latency_us": 13712.147692307692 01:26:05.449 } 01:26:05.449 ], 01:26:05.449 "core_count": 1 01:26:05.449 } 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83302 ']' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:05.449 killing process with pid 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83302' 01:26:05.449 Received shutdown signal, test time was about 1.000000 seconds 01:26:05.449 01:26:05.449 Latency(us) 01:26:05.449 [2024-12-09T05:20:57.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:05.449 [2024-12-09T05:20:57.066Z] =================================================================================================================== 01:26:05.449 [2024-12-09T05:20:57.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83302 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83198 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83198 ']' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83198 01:26:05.449 05:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83198 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:05.449 killing process with pid 83198 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83198' 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83198 01:26:05.449 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83198 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83377 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83377 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83377 ']' 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:05.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:05.449 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:05.449 [2024-12-09 05:20:57.053320] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:05.449 [2024-12-09 05:20:57.053379] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:05.722 [2024-12-09 05:20:57.191911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:05.722 [2024-12-09 05:20:57.221336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:05.722 [2024-12-09 05:20:57.221371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:26:05.722 [2024-12-09 05:20:57.221376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:05.722 [2024-12-09 05:20:57.221380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:05.722 [2024-12-09 05:20:57.221383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:05.722 [2024-12-09 05:20:57.221588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:06.324 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:06.324 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:06.324 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:06.324 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:06.324 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:06.583 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:06.583 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 01:26:06.583 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:06.583 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:06.583 [2024-12-09 05:20:57.966771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:06.583 malloc0 01:26:06.583 [2024-12-09 05:20:57.992467] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:06.583 [2024-12-09 05:20:57.992603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=83426 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 83426 /var/tmp/bdevperf.sock 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83426 ']' 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:06.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:06.583 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:06.583 [2024-12-09 05:20:58.060481] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
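This last target is started without a -c config; its transport, keyring, subsystem and TLS listener are built over RPC and then, further below, snapshotted with rpc_cmd save_config into tgtcfg so the whole setup can be replayed at a later start. A minimal standalone sketch of that reuse pattern, with a hypothetical file path:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt_config.json  # hypothetical destination
  # a later start can then skip the per-RPC setup, e.g. nvmf_tgt -c /tmp/tgt_config.json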
01:26:06.583 [2024-12-09 05:20:58.060538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83426 ] 01:26:06.583 [2024-12-09 05:20:58.191577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:06.841 [2024-12-09 05:20:58.222004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:07.406 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:07.406 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:07.406 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oT7ReP1J2E 01:26:07.663 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:26:07.920 [2024-12-09 05:20:59.287440] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:07.920 nvme0n1 01:26:07.920 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:26:07.920 Running I/O for 1 seconds... 01:26:09.111 7277.00 IOPS, 28.43 MiB/s 01:26:09.111 Latency(us) 01:26:09.111 [2024-12-09T05:21:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:09.112 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:26:09.112 Verification LBA range: start 0x0 length 0x2000 01:26:09.112 nvme0n1 : 1.01 7338.53 28.67 0.00 0.00 17338.37 3377.62 12855.14 01:26:09.112 [2024-12-09T05:21:00.729Z] =================================================================================================================== 01:26:09.112 [2024-12-09T05:21:00.729Z] Total : 7338.53 28.67 0.00 0.00 17338.37 3377.62 12855.14 01:26:09.112 { 01:26:09.112 "results": [ 01:26:09.112 { 01:26:09.112 "job": "nvme0n1", 01:26:09.112 "core_mask": "0x2", 01:26:09.112 "workload": "verify", 01:26:09.112 "status": "finished", 01:26:09.112 "verify_range": { 01:26:09.112 "start": 0, 01:26:09.112 "length": 8192 01:26:09.112 }, 01:26:09.112 "queue_depth": 128, 01:26:09.112 "io_size": 4096, 01:26:09.112 "runtime": 1.009057, 01:26:09.112 "iops": 7338.534889505747, 01:26:09.112 "mibps": 28.666151912131824, 01:26:09.112 "io_failed": 0, 01:26:09.112 "io_timeout": 0, 01:26:09.112 "avg_latency_us": 17338.36693626967, 01:26:09.112 "min_latency_us": 3377.6246153846155, 01:26:09.112 "max_latency_us": 12855.138461538461 01:26:09.112 } 01:26:09.112 ], 01:26:09.112 "core_count": 1 01:26:09.112 } 01:26:09.112 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 01:26:09.112 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:09.112 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:09.112 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:09.112 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
01:26:09.112 "subsystems": [ 01:26:09.112 { 01:26:09.112 "subsystem": "keyring", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "keyring_file_add_key", 01:26:09.112 "params": { 01:26:09.112 "name": "key0", 01:26:09.112 "path": "/tmp/tmp.oT7ReP1J2E" 01:26:09.112 } 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "iobuf", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "iobuf_set_options", 01:26:09.112 "params": { 01:26:09.112 "enable_numa": false, 01:26:09.112 "large_bufsize": 135168, 01:26:09.112 "large_pool_count": 1024, 01:26:09.112 "small_bufsize": 8192, 01:26:09.112 "small_pool_count": 8192 01:26:09.112 } 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "sock", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "sock_set_default_impl", 01:26:09.112 "params": { 01:26:09.112 "impl_name": "posix" 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "sock_impl_set_options", 01:26:09.112 "params": { 01:26:09.112 "enable_ktls": false, 01:26:09.112 "enable_placement_id": 0, 01:26:09.112 "enable_quickack": false, 01:26:09.112 "enable_recv_pipe": true, 01:26:09.112 "enable_zerocopy_send_client": false, 01:26:09.112 "enable_zerocopy_send_server": true, 01:26:09.112 "impl_name": "ssl", 01:26:09.112 "recv_buf_size": 4096, 01:26:09.112 "send_buf_size": 4096, 01:26:09.112 "tls_version": 0, 01:26:09.112 "zerocopy_threshold": 0 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "sock_impl_set_options", 01:26:09.112 "params": { 01:26:09.112 "enable_ktls": false, 01:26:09.112 "enable_placement_id": 0, 01:26:09.112 "enable_quickack": false, 01:26:09.112 "enable_recv_pipe": true, 01:26:09.112 "enable_zerocopy_send_client": false, 01:26:09.112 "enable_zerocopy_send_server": true, 01:26:09.112 "impl_name": "posix", 01:26:09.112 "recv_buf_size": 2097152, 01:26:09.112 "send_buf_size": 2097152, 01:26:09.112 "tls_version": 0, 01:26:09.112 "zerocopy_threshold": 0 01:26:09.112 } 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "vmd", 01:26:09.112 "config": [] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "accel", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "accel_set_options", 01:26:09.112 "params": { 01:26:09.112 "buf_count": 2048, 01:26:09.112 "large_cache_size": 16, 01:26:09.112 "sequence_count": 2048, 01:26:09.112 "small_cache_size": 128, 01:26:09.112 "task_count": 2048 01:26:09.112 } 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "bdev", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "bdev_set_options", 01:26:09.112 "params": { 01:26:09.112 "bdev_auto_examine": true, 01:26:09.112 "bdev_io_cache_size": 256, 01:26:09.112 "bdev_io_pool_size": 65535, 01:26:09.112 "iobuf_large_cache_size": 16, 01:26:09.112 "iobuf_small_cache_size": 128 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_raid_set_options", 01:26:09.112 "params": { 01:26:09.112 "process_max_bandwidth_mb_sec": 0, 01:26:09.112 "process_window_size_kb": 1024 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_iscsi_set_options", 01:26:09.112 "params": { 01:26:09.112 "timeout_sec": 30 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_nvme_set_options", 01:26:09.112 "params": { 01:26:09.112 "action_on_timeout": "none", 01:26:09.112 "allow_accel_sequence": false, 01:26:09.112 "arbitration_burst": 0, 01:26:09.112 
"bdev_retry_count": 3, 01:26:09.112 "ctrlr_loss_timeout_sec": 0, 01:26:09.112 "delay_cmd_submit": true, 01:26:09.112 "dhchap_dhgroups": [ 01:26:09.112 "null", 01:26:09.112 "ffdhe2048", 01:26:09.112 "ffdhe3072", 01:26:09.112 "ffdhe4096", 01:26:09.112 "ffdhe6144", 01:26:09.112 "ffdhe8192" 01:26:09.112 ], 01:26:09.112 "dhchap_digests": [ 01:26:09.112 "sha256", 01:26:09.112 "sha384", 01:26:09.112 "sha512" 01:26:09.112 ], 01:26:09.112 "disable_auto_failback": false, 01:26:09.112 "fast_io_fail_timeout_sec": 0, 01:26:09.112 "generate_uuids": false, 01:26:09.112 "high_priority_weight": 0, 01:26:09.112 "io_path_stat": false, 01:26:09.112 "io_queue_requests": 0, 01:26:09.112 "keep_alive_timeout_ms": 10000, 01:26:09.112 "low_priority_weight": 0, 01:26:09.112 "medium_priority_weight": 0, 01:26:09.112 "nvme_adminq_poll_period_us": 10000, 01:26:09.112 "nvme_error_stat": false, 01:26:09.112 "nvme_ioq_poll_period_us": 0, 01:26:09.112 "rdma_cm_event_timeout_ms": 0, 01:26:09.112 "rdma_max_cq_size": 0, 01:26:09.112 "rdma_srq_size": 0, 01:26:09.112 "reconnect_delay_sec": 0, 01:26:09.112 "timeout_admin_us": 0, 01:26:09.112 "timeout_us": 0, 01:26:09.112 "transport_ack_timeout": 0, 01:26:09.112 "transport_retry_count": 4, 01:26:09.112 "transport_tos": 0 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_nvme_set_hotplug", 01:26:09.112 "params": { 01:26:09.112 "enable": false, 01:26:09.112 "period_us": 100000 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_malloc_create", 01:26:09.112 "params": { 01:26:09.112 "block_size": 4096, 01:26:09.112 "dif_is_head_of_md": false, 01:26:09.112 "dif_pi_format": 0, 01:26:09.112 "dif_type": 0, 01:26:09.112 "md_size": 0, 01:26:09.112 "name": "malloc0", 01:26:09.112 "num_blocks": 8192, 01:26:09.112 "optimal_io_boundary": 0, 01:26:09.112 "physical_block_size": 4096, 01:26:09.112 "uuid": "54c4918a-6f2d-459b-9acc-dc07274db174" 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "method": "bdev_wait_for_examine" 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "nbd", 01:26:09.112 "config": [] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "scheduler", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "framework_set_scheduler", 01:26:09.112 "params": { 01:26:09.112 "name": "static" 01:26:09.112 } 01:26:09.112 } 01:26:09.112 ] 01:26:09.112 }, 01:26:09.112 { 01:26:09.112 "subsystem": "nvmf", 01:26:09.112 "config": [ 01:26:09.112 { 01:26:09.112 "method": "nvmf_set_config", 01:26:09.112 "params": { 01:26:09.112 "admin_cmd_passthru": { 01:26:09.112 "identify_ctrlr": false 01:26:09.112 }, 01:26:09.112 "dhchap_dhgroups": [ 01:26:09.112 "null", 01:26:09.112 "ffdhe2048", 01:26:09.112 "ffdhe3072", 01:26:09.112 "ffdhe4096", 01:26:09.112 "ffdhe6144", 01:26:09.112 "ffdhe8192" 01:26:09.112 ], 01:26:09.112 "dhchap_digests": [ 01:26:09.112 "sha256", 01:26:09.112 "sha384", 01:26:09.112 "sha512" 01:26:09.112 ], 01:26:09.112 "discovery_filter": "match_any" 01:26:09.112 } 01:26:09.112 }, 01:26:09.112 { 01:26:09.113 "method": "nvmf_set_max_subsystems", 01:26:09.113 "params": { 01:26:09.113 "max_subsystems": 1024 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_set_crdt", 01:26:09.113 "params": { 01:26:09.113 "crdt1": 0, 01:26:09.113 "crdt2": 0, 01:26:09.113 "crdt3": 0 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_create_transport", 01:26:09.113 "params": { 01:26:09.113 "abort_timeout_sec": 1, 01:26:09.113 "ack_timeout": 0, 
01:26:09.113 "buf_cache_size": 4294967295, 01:26:09.113 "c2h_success": false, 01:26:09.113 "data_wr_pool_size": 0, 01:26:09.113 "dif_insert_or_strip": false, 01:26:09.113 "in_capsule_data_size": 4096, 01:26:09.113 "io_unit_size": 131072, 01:26:09.113 "max_aq_depth": 128, 01:26:09.113 "max_io_qpairs_per_ctrlr": 127, 01:26:09.113 "max_io_size": 131072, 01:26:09.113 "max_queue_depth": 128, 01:26:09.113 "num_shared_buffers": 511, 01:26:09.113 "sock_priority": 0, 01:26:09.113 "trtype": "TCP", 01:26:09.113 "zcopy": false 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_create_subsystem", 01:26:09.113 "params": { 01:26:09.113 "allow_any_host": false, 01:26:09.113 "ana_reporting": false, 01:26:09.113 "max_cntlid": 65519, 01:26:09.113 "max_namespaces": 32, 01:26:09.113 "min_cntlid": 1, 01:26:09.113 "model_number": "SPDK bdev Controller", 01:26:09.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.113 "serial_number": "00000000000000000000" 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_subsystem_add_host", 01:26:09.113 "params": { 01:26:09.113 "host": "nqn.2016-06.io.spdk:host1", 01:26:09.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.113 "psk": "key0" 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_subsystem_add_ns", 01:26:09.113 "params": { 01:26:09.113 "namespace": { 01:26:09.113 "bdev_name": "malloc0", 01:26:09.113 "nguid": "54C4918A6F2D459B9ACCDC07274DB174", 01:26:09.113 "no_auto_visible": false, 01:26:09.113 "nsid": 1, 01:26:09.113 "uuid": "54c4918a-6f2d-459b-9acc-dc07274db174" 01:26:09.113 }, 01:26:09.113 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:26:09.113 } 01:26:09.113 }, 01:26:09.113 { 01:26:09.113 "method": "nvmf_subsystem_add_listener", 01:26:09.113 "params": { 01:26:09.113 "listen_address": { 01:26:09.113 "adrfam": "IPv4", 01:26:09.113 "traddr": "10.0.0.3", 01:26:09.113 "trsvcid": "4420", 01:26:09.113 "trtype": "TCP" 01:26:09.113 }, 01:26:09.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.113 "secure_channel": false, 01:26:09.113 "sock_impl": "ssl" 01:26:09.113 } 01:26:09.113 } 01:26:09.113 ] 01:26:09.113 } 01:26:09.113 ] 01:26:09.113 }' 01:26:09.113 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:26:09.371 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 01:26:09.371 "subsystems": [ 01:26:09.371 { 01:26:09.371 "subsystem": "keyring", 01:26:09.371 "config": [ 01:26:09.371 { 01:26:09.371 "method": "keyring_file_add_key", 01:26:09.371 "params": { 01:26:09.371 "name": "key0", 01:26:09.371 "path": "/tmp/tmp.oT7ReP1J2E" 01:26:09.371 } 01:26:09.371 } 01:26:09.371 ] 01:26:09.371 }, 01:26:09.371 { 01:26:09.371 "subsystem": "iobuf", 01:26:09.371 "config": [ 01:26:09.371 { 01:26:09.371 "method": "iobuf_set_options", 01:26:09.371 "params": { 01:26:09.371 "enable_numa": false, 01:26:09.371 "large_bufsize": 135168, 01:26:09.371 "large_pool_count": 1024, 01:26:09.371 "small_bufsize": 8192, 01:26:09.371 "small_pool_count": 8192 01:26:09.371 } 01:26:09.371 } 01:26:09.371 ] 01:26:09.371 }, 01:26:09.371 { 01:26:09.371 "subsystem": "sock", 01:26:09.371 "config": [ 01:26:09.371 { 01:26:09.371 "method": "sock_set_default_impl", 01:26:09.371 "params": { 01:26:09.371 "impl_name": "posix" 01:26:09.371 } 01:26:09.371 }, 01:26:09.371 { 01:26:09.371 "method": "sock_impl_set_options", 01:26:09.371 "params": { 01:26:09.371 "enable_ktls": false, 01:26:09.371 "enable_placement_id": 0, 
01:26:09.371 "enable_quickack": false, 01:26:09.371 "enable_recv_pipe": true, 01:26:09.371 "enable_zerocopy_send_client": false, 01:26:09.371 "enable_zerocopy_send_server": true, 01:26:09.371 "impl_name": "ssl", 01:26:09.371 "recv_buf_size": 4096, 01:26:09.371 "send_buf_size": 4096, 01:26:09.371 "tls_version": 0, 01:26:09.371 "zerocopy_threshold": 0 01:26:09.371 } 01:26:09.371 }, 01:26:09.371 { 01:26:09.371 "method": "sock_impl_set_options", 01:26:09.371 "params": { 01:26:09.371 "enable_ktls": false, 01:26:09.371 "enable_placement_id": 0, 01:26:09.371 "enable_quickack": false, 01:26:09.371 "enable_recv_pipe": true, 01:26:09.371 "enable_zerocopy_send_client": false, 01:26:09.371 "enable_zerocopy_send_server": true, 01:26:09.371 "impl_name": "posix", 01:26:09.371 "recv_buf_size": 2097152, 01:26:09.371 "send_buf_size": 2097152, 01:26:09.371 "tls_version": 0, 01:26:09.371 "zerocopy_threshold": 0 01:26:09.371 } 01:26:09.371 } 01:26:09.371 ] 01:26:09.371 }, 01:26:09.371 { 01:26:09.371 "subsystem": "vmd", 01:26:09.371 "config": [] 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "subsystem": "accel", 01:26:09.372 "config": [ 01:26:09.372 { 01:26:09.372 "method": "accel_set_options", 01:26:09.372 "params": { 01:26:09.372 "buf_count": 2048, 01:26:09.372 "large_cache_size": 16, 01:26:09.372 "sequence_count": 2048, 01:26:09.372 "small_cache_size": 128, 01:26:09.372 "task_count": 2048 01:26:09.372 } 01:26:09.372 } 01:26:09.372 ] 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "subsystem": "bdev", 01:26:09.372 "config": [ 01:26:09.372 { 01:26:09.372 "method": "bdev_set_options", 01:26:09.372 "params": { 01:26:09.372 "bdev_auto_examine": true, 01:26:09.372 "bdev_io_cache_size": 256, 01:26:09.372 "bdev_io_pool_size": 65535, 01:26:09.372 "iobuf_large_cache_size": 16, 01:26:09.372 "iobuf_small_cache_size": 128 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_raid_set_options", 01:26:09.372 "params": { 01:26:09.372 "process_max_bandwidth_mb_sec": 0, 01:26:09.372 "process_window_size_kb": 1024 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_iscsi_set_options", 01:26:09.372 "params": { 01:26:09.372 "timeout_sec": 30 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_nvme_set_options", 01:26:09.372 "params": { 01:26:09.372 "action_on_timeout": "none", 01:26:09.372 "allow_accel_sequence": false, 01:26:09.372 "arbitration_burst": 0, 01:26:09.372 "bdev_retry_count": 3, 01:26:09.372 "ctrlr_loss_timeout_sec": 0, 01:26:09.372 "delay_cmd_submit": true, 01:26:09.372 "dhchap_dhgroups": [ 01:26:09.372 "null", 01:26:09.372 "ffdhe2048", 01:26:09.372 "ffdhe3072", 01:26:09.372 "ffdhe4096", 01:26:09.372 "ffdhe6144", 01:26:09.372 "ffdhe8192" 01:26:09.372 ], 01:26:09.372 "dhchap_digests": [ 01:26:09.372 "sha256", 01:26:09.372 "sha384", 01:26:09.372 "sha512" 01:26:09.372 ], 01:26:09.372 "disable_auto_failback": false, 01:26:09.372 "fast_io_fail_timeout_sec": 0, 01:26:09.372 "generate_uuids": false, 01:26:09.372 "high_priority_weight": 0, 01:26:09.372 "io_path_stat": false, 01:26:09.372 "io_queue_requests": 512, 01:26:09.372 "keep_alive_timeout_ms": 10000, 01:26:09.372 "low_priority_weight": 0, 01:26:09.372 "medium_priority_weight": 0, 01:26:09.372 "nvme_adminq_poll_period_us": 10000, 01:26:09.372 "nvme_error_stat": false, 01:26:09.372 "nvme_ioq_poll_period_us": 0, 01:26:09.372 "rdma_cm_event_timeout_ms": 0, 01:26:09.372 "rdma_max_cq_size": 0, 01:26:09.372 "rdma_srq_size": 0, 01:26:09.372 "reconnect_delay_sec": 0, 01:26:09.372 "timeout_admin_us": 0, 01:26:09.372 
"timeout_us": 0, 01:26:09.372 "transport_ack_timeout": 0, 01:26:09.372 "transport_retry_count": 4, 01:26:09.372 "transport_tos": 0 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_nvme_attach_controller", 01:26:09.372 "params": { 01:26:09.372 "adrfam": "IPv4", 01:26:09.372 "ctrlr_loss_timeout_sec": 0, 01:26:09.372 "ddgst": false, 01:26:09.372 "fast_io_fail_timeout_sec": 0, 01:26:09.372 "hdgst": false, 01:26:09.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:26:09.372 "multipath": "multipath", 01:26:09.372 "name": "nvme0", 01:26:09.372 "prchk_guard": false, 01:26:09.372 "prchk_reftag": false, 01:26:09.372 "psk": "key0", 01:26:09.372 "reconnect_delay_sec": 0, 01:26:09.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.372 "traddr": "10.0.0.3", 01:26:09.372 "trsvcid": "4420", 01:26:09.372 "trtype": "TCP" 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_nvme_set_hotplug", 01:26:09.372 "params": { 01:26:09.372 "enable": false, 01:26:09.372 "period_us": 100000 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_enable_histogram", 01:26:09.372 "params": { 01:26:09.372 "enable": true, 01:26:09.372 "name": "nvme0n1" 01:26:09.372 } 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "method": "bdev_wait_for_examine" 01:26:09.372 } 01:26:09.372 ] 01:26:09.372 }, 01:26:09.372 { 01:26:09.372 "subsystem": "nbd", 01:26:09.372 "config": [] 01:26:09.372 } 01:26:09.372 ] 01:26:09.372 }' 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 83426 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83426 ']' 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83426 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83426 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:09.372 killing process with pid 83426 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83426' 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83426 01:26:09.372 Received shutdown signal, test time was about 1.000000 seconds 01:26:09.372 01:26:09.372 Latency(us) 01:26:09.372 [2024-12-09T05:21:00.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:09.372 [2024-12-09T05:21:00.989Z] =================================================================================================================== 01:26:09.372 [2024-12-09T05:21:00.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:09.372 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83426 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 83377 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83377 ']' 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83377 
01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83377 01:26:09.630 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83377' 01:26:09.631 killing process with pid 83377 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83377 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83377 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:09.631 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 01:26:09.631 "subsystems": [ 01:26:09.631 { 01:26:09.631 "subsystem": "keyring", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "keyring_file_add_key", 01:26:09.631 "params": { 01:26:09.631 "name": "key0", 01:26:09.631 "path": "/tmp/tmp.oT7ReP1J2E" 01:26:09.631 } 01:26:09.631 } 01:26:09.631 ] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "iobuf", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "iobuf_set_options", 01:26:09.631 "params": { 01:26:09.631 "enable_numa": false, 01:26:09.631 "large_bufsize": 135168, 01:26:09.631 "large_pool_count": 1024, 01:26:09.631 "small_bufsize": 8192, 01:26:09.631 "small_pool_count": 8192 01:26:09.631 } 01:26:09.631 } 01:26:09.631 ] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "sock", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "sock_set_default_impl", 01:26:09.631 "params": { 01:26:09.631 "impl_name": "posix" 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "sock_impl_set_options", 01:26:09.631 "params": { 01:26:09.631 "enable_ktls": false, 01:26:09.631 "enable_placement_id": 0, 01:26:09.631 "enable_quickack": false, 01:26:09.631 "enable_recv_pipe": true, 01:26:09.631 "enable_zerocopy_send_client": false, 01:26:09.631 "enable_zerocopy_send_server": true, 01:26:09.631 "impl_name": "ssl", 01:26:09.631 "recv_buf_size": 4096, 01:26:09.631 "send_buf_size": 4096, 01:26:09.631 "tls_version": 0, 01:26:09.631 "zerocopy_threshold": 0 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "sock_impl_set_options", 01:26:09.631 "params": { 01:26:09.631 "enable_ktls": false, 01:26:09.631 "enable_placement_id": 0, 01:26:09.631 "enable_quickack": false, 01:26:09.631 "enable_recv_pipe": true, 01:26:09.631 "enable_zerocopy_send_client": false, 01:26:09.631 "enable_zerocopy_send_server": true, 01:26:09.631 "impl_name": "posix", 01:26:09.631 "recv_buf_size": 2097152, 01:26:09.631 "send_buf_size": 2097152, 01:26:09.631 "tls_version": 0, 01:26:09.631 "zerocopy_threshold": 0 01:26:09.631 } 
01:26:09.631 } 01:26:09.631 ] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "vmd", 01:26:09.631 "config": [] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "accel", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "accel_set_options", 01:26:09.631 "params": { 01:26:09.631 "buf_count": 2048, 01:26:09.631 "large_cache_size": 16, 01:26:09.631 "sequence_count": 2048, 01:26:09.631 "small_cache_size": 128, 01:26:09.631 "task_count": 2048 01:26:09.631 } 01:26:09.631 } 01:26:09.631 ] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "bdev", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "bdev_set_options", 01:26:09.631 "params": { 01:26:09.631 "bdev_auto_examine": true, 01:26:09.631 "bdev_io_cache_size": 256, 01:26:09.631 "bdev_io_pool_size": 65535, 01:26:09.631 "iobuf_large_cache_size": 16, 01:26:09.631 "iobuf_small_cache_size": 128 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_raid_set_options", 01:26:09.631 "params": { 01:26:09.631 "process_max_bandwidth_mb_sec": 0, 01:26:09.631 "process_window_size_kb": 1024 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_iscsi_set_options", 01:26:09.631 "params": { 01:26:09.631 "timeout_sec": 30 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_nvme_set_options", 01:26:09.631 "params": { 01:26:09.631 "action_on_timeout": "none", 01:26:09.631 "allow_accel_sequence": false, 01:26:09.631 "arbitration_burst": 0, 01:26:09.631 "bdev_retry_count": 3, 01:26:09.631 "ctrlr_loss_timeout_sec": 0, 01:26:09.631 "delay_cmd_submit": true, 01:26:09.631 "dhchap_dhgroups": [ 01:26:09.631 "null", 01:26:09.631 "ffdhe2048", 01:26:09.631 "ffdhe3072", 01:26:09.631 "ffdhe4096", 01:26:09.631 "ffdhe6144", 01:26:09.631 "ffdhe8192" 01:26:09.631 ], 01:26:09.631 "dhchap_digests": [ 01:26:09.631 "sha256", 01:26:09.631 "sha384", 01:26:09.631 "sha512" 01:26:09.631 ], 01:26:09.631 "disable_auto_failback": false, 01:26:09.631 "fast_io_fail_timeout_sec": 0, 01:26:09.631 "generate_uuids": false, 01:26:09.631 "high_priority_weight": 0, 01:26:09.631 "io_path_stat": false, 01:26:09.631 "io_queue_requests": 0, 01:26:09.631 "keep_alive_timeout_ms": 10000, 01:26:09.631 "low_priority_weight": 0, 01:26:09.631 "medium_priority_weight": 0, 01:26:09.631 "nvme_adminq_poll_period_us": 10000, 01:26:09.631 "nvme_error_stat": false, 01:26:09.631 "nvme_ioq_poll_period_us": 0, 01:26:09.631 "rdma_cm_event_timeout_ms": 0, 01:26:09.631 "rdma_max_cq_size": 0, 01:26:09.631 "rdma_srq_size": 0, 01:26:09.631 "reconnect_delay_sec": 0, 01:26:09.631 "timeout_admin_us": 0, 01:26:09.631 "timeout_us": 0, 01:26:09.631 "transport_ack_timeout": 0, 01:26:09.631 "transport_retry_count": 4, 01:26:09.631 "transport_tos": 0 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_nvme_set_hotplug", 01:26:09.631 "params": { 01:26:09.631 "enable": false, 01:26:09.631 "period_us": 100000 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_malloc_create", 01:26:09.631 "params": { 01:26:09.631 "block_size": 4096, 01:26:09.631 "dif_is_head_of_md": false, 01:26:09.631 "dif_pi_format": 0, 01:26:09.631 "dif_type": 0, 01:26:09.631 "md_size": 0, 01:26:09.631 "name": "malloc0", 01:26:09.631 "num_blocks": 8192, 01:26:09.631 "optimal_io_boundary": 0, 01:26:09.631 "physical_block_size": 4096, 01:26:09.631 "uuid": "54c4918a-6f2d-459b-9acc-dc07274db174" 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "bdev_wait_for_examine" 01:26:09.631 } 01:26:09.631 ] 
01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "nbd", 01:26:09.631 "config": [] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "scheduler", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "framework_set_scheduler", 01:26:09.631 "params": { 01:26:09.631 "name": "static" 01:26:09.631 } 01:26:09.631 } 01:26:09.631 ] 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "subsystem": "nvmf", 01:26:09.631 "config": [ 01:26:09.631 { 01:26:09.631 "method": "nvmf_set_config", 01:26:09.631 "params": { 01:26:09.631 "admin_cmd_passthru": { 01:26:09.631 "identify_ctrlr": false 01:26:09.631 }, 01:26:09.631 "dhchap_dhgroups": [ 01:26:09.631 "null", 01:26:09.631 "ffdhe2048", 01:26:09.631 "ffdhe3072", 01:26:09.631 "ffdhe4096", 01:26:09.631 "ffdhe6144", 01:26:09.631 "ffdhe8192" 01:26:09.631 ], 01:26:09.631 "dhchap_digests": [ 01:26:09.631 "sha256", 01:26:09.631 "sha384", 01:26:09.631 "sha512" 01:26:09.631 ], 01:26:09.631 "discovery_filter": "match_any" 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "nvmf_set_max_subsystems", 01:26:09.631 "params": { 01:26:09.631 "max_subsystems": 1024 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "nvmf_set_crdt", 01:26:09.631 "params": { 01:26:09.631 "crdt1": 0, 01:26:09.631 "crdt2": 0, 01:26:09.631 "crdt3": 0 01:26:09.631 } 01:26:09.631 }, 01:26:09.631 { 01:26:09.631 "method": "nvmf_create_transport", 01:26:09.631 "params": { 01:26:09.631 "abort_timeout_sec": 1, 01:26:09.631 "ack_timeout": 0, 01:26:09.631 "buf_cache_size": 4294967295, 01:26:09.631 "c2h_success": false, 01:26:09.631 "data_wr_pool_size": 0, 01:26:09.631 "dif_insert_or_strip": false, 01:26:09.631 "in_capsule_data_size": 4096, 01:26:09.631 "io_unit_size": 131072, 01:26:09.631 "max_aq_depth": 128, 01:26:09.631 "max_io_qpairs_per_ctrlr": 127, 01:26:09.631 "max_io_size": 131072, 01:26:09.631 "max_queue_depth": 128, 01:26:09.631 "num_shared_buffers": 511, 01:26:09.631 "sock_priority": 0, 01:26:09.632 "trtype": "TCP", 01:26:09.632 "zcopy": false 01:26:09.632 } 01:26:09.632 }, 01:26:09.632 { 01:26:09.632 "method": "nvmf_create_subsystem", 01:26:09.632 "params": { 01:26:09.632 "allow_any_host": false, 01:26:09.632 "ana_reporting": false, 01:26:09.632 "max_cntlid": 65519, 01:26:09.632 "max_namespaces": 32, 01:26:09.632 "min_cntlid": 1, 01:26:09.632 "model_number": "SPDK bdev Controller", 01:26:09.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.632 "serial_number": "00000000000000000000" 01:26:09.632 } 01:26:09.632 }, 01:26:09.632 { 01:26:09.632 "method": "nvmf_subsystem_add_host", 01:26:09.632 "params": { 01:26:09.632 "host": "nqn.2016-06.io.spdk:host1", 01:26:09.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.632 "psk": "key0" 01:26:09.632 } 01:26:09.632 }, 01:26:09.632 { 01:26:09.632 "method": "nvmf_subsystem_add_ns", 01:26:09.632 "params": { 01:26:09.632 "namespace": { 01:26:09.632 "bdev_name": "malloc0", 01:26:09.632 "nguid": "54C4918A6F2D459B9ACCDC07274DB174", 01:26:09.632 "no_auto_visible": false, 01:26:09.632 "nsid": 1, 01:26:09.632 "uuid": "54c4918a-6f2d-459b-9acc-dc07274db174" 01:26:09.632 }, 01:26:09.632 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:26:09.632 } 01:26:09.632 }, 01:26:09.632 { 01:26:09.632 "method": "nvmf_subsystem_add_listener", 01:26:09.632 "params": { 01:26:09.632 "listen_address": { 01:26:09.632 "adrfam": "IPv4", 01:26:09.632 "traddr": "10.0.0.3", 01:26:09.632 "trsvcid": "4420", 01:26:09.632 "trtype": "TCP" 01:26:09.632 }, 01:26:09.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:09.632 "secure_channel": false, 01:26:09.632 
"sock_impl": "ssl" 01:26:09.632 } 01:26:09.632 } 01:26:09.632 ] 01:26:09.632 } 01:26:09.632 ] 01:26:09.632 }' 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83507 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83507 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83507 ']' 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:09.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:09.632 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:09.632 [2024-12-09 05:21:01.237540] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:09.632 [2024-12-09 05:21:01.237610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:09.889 [2024-12-09 05:21:01.373636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:09.889 [2024-12-09 05:21:01.403549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:09.889 [2024-12-09 05:21:01.403593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:09.889 [2024-12-09 05:21:01.403599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:09.889 [2024-12-09 05:21:01.403603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:09.889 [2024-12-09 05:21:01.403606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:26:09.889 [2024-12-09 05:21:01.403836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:10.147 [2024-12-09 05:21:01.597067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:10.147 [2024-12-09 05:21:01.629021] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:10.147 [2024-12-09 05:21:01.629149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=83551 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 83551 /var/tmp/bdevperf.sock 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83551 ']' 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:26:10.765 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 01:26:10.765 "subsystems": [ 01:26:10.765 { 01:26:10.765 "subsystem": "keyring", 01:26:10.765 "config": [ 01:26:10.765 { 01:26:10.765 "method": "keyring_file_add_key", 01:26:10.765 "params": { 01:26:10.765 "name": "key0", 01:26:10.765 "path": "/tmp/tmp.oT7ReP1J2E" 01:26:10.765 } 01:26:10.765 } 01:26:10.765 ] 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "subsystem": "iobuf", 01:26:10.765 "config": [ 01:26:10.765 { 01:26:10.765 "method": "iobuf_set_options", 01:26:10.765 "params": { 01:26:10.765 "enable_numa": false, 01:26:10.765 "large_bufsize": 135168, 01:26:10.765 "large_pool_count": 1024, 01:26:10.765 "small_bufsize": 8192, 01:26:10.765 "small_pool_count": 8192 01:26:10.765 } 01:26:10.765 } 01:26:10.765 ] 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "subsystem": "sock", 01:26:10.765 "config": [ 01:26:10.765 { 01:26:10.765 "method": "sock_set_default_impl", 01:26:10.765 "params": { 01:26:10.765 "impl_name": "posix" 01:26:10.765 } 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "method": "sock_impl_set_options", 01:26:10.765 "params": { 01:26:10.765 "enable_ktls": false, 01:26:10.765 "enable_placement_id": 0, 01:26:10.765 "enable_quickack": false, 01:26:10.765 "enable_recv_pipe": true, 01:26:10.765 "enable_zerocopy_send_client": false, 01:26:10.765 "enable_zerocopy_send_server": true, 01:26:10.765 "impl_name": "ssl", 01:26:10.765 "recv_buf_size": 4096, 01:26:10.765 "send_buf_size": 4096, 01:26:10.765 "tls_version": 0, 01:26:10.765 "zerocopy_threshold": 0 01:26:10.765 } 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "method": "sock_impl_set_options", 01:26:10.765 "params": { 01:26:10.765 "enable_ktls": false, 01:26:10.765 "enable_placement_id": 0, 01:26:10.765 "enable_quickack": false, 01:26:10.765 "enable_recv_pipe": true, 01:26:10.765 "enable_zerocopy_send_client": false, 01:26:10.765 "enable_zerocopy_send_server": true, 01:26:10.765 "impl_name": "posix", 01:26:10.765 "recv_buf_size": 2097152, 01:26:10.765 "send_buf_size": 2097152, 01:26:10.765 "tls_version": 0, 01:26:10.765 "zerocopy_threshold": 0 01:26:10.765 } 01:26:10.765 } 01:26:10.765 ] 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "subsystem": "vmd", 01:26:10.765 "config": [] 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "subsystem": "accel", 01:26:10.765 "config": [ 01:26:10.765 { 01:26:10.765 "method": "accel_set_options", 01:26:10.765 "params": { 01:26:10.765 "buf_count": 2048, 01:26:10.765 "large_cache_size": 16, 01:26:10.765 "sequence_count": 2048, 01:26:10.765 "small_cache_size": 128, 01:26:10.765 "task_count": 2048 01:26:10.765 } 01:26:10.765 } 01:26:10.765 ] 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "subsystem": "bdev", 01:26:10.765 "config": [ 01:26:10.765 { 01:26:10.765 "method": "bdev_set_options", 01:26:10.765 "params": { 01:26:10.765 "bdev_auto_examine": true, 01:26:10.765 "bdev_io_cache_size": 256, 01:26:10.765 "bdev_io_pool_size": 65535, 01:26:10.765 "iobuf_large_cache_size": 16, 01:26:10.765 "iobuf_small_cache_size": 128 01:26:10.765 } 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "method": "bdev_raid_set_options", 
01:26:10.765 "params": { 01:26:10.765 "process_max_bandwidth_mb_sec": 0, 01:26:10.765 "process_window_size_kb": 1024 01:26:10.765 } 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "method": "bdev_iscsi_set_options", 01:26:10.765 "params": { 01:26:10.765 "timeout_sec": 30 01:26:10.765 } 01:26:10.765 }, 01:26:10.765 { 01:26:10.765 "method": "bdev_nvme_set_options", 01:26:10.765 "params": { 01:26:10.765 "action_on_timeout": "none", 01:26:10.765 "allow_accel_sequence": false, 01:26:10.765 "arbitration_burst": 0, 01:26:10.765 "bdev_retry_count": 3, 01:26:10.765 "ctrlr_loss_timeout_sec": 0, 01:26:10.765 "delay_cmd_submit": true, 01:26:10.765 "dhchap_dhgroups": [ 01:26:10.765 "null", 01:26:10.765 "ffdhe2048", 01:26:10.765 "ffdhe3072", 01:26:10.765 "ffdhe4096", 01:26:10.765 "ffdhe6144", 01:26:10.766 "ffdhe8192" 01:26:10.766 ], 01:26:10.766 "dhchap_digests": [ 01:26:10.766 "sha256", 01:26:10.766 "sha384", 01:26:10.766 "sha512" 01:26:10.766 ], 01:26:10.766 "disable_auto_failback": false, 01:26:10.766 "fast_io_fail_timeout_sec": 0, 01:26:10.766 "generate_uuids": false, 01:26:10.766 "high_priority_weight": 0, 01:26:10.766 "io_path_stat": false, 01:26:10.766 "io_queue_requests": 512, 01:26:10.766 "keep_alive_timeout_ms": 10000, 01:26:10.766 "low_priority_weight": 0, 01:26:10.766 "medium_priority_weight": 0, 01:26:10.766 "nvme_adminq_poll_period_us": 10000, 01:26:10.766 "nvme_error_stat": false, 01:26:10.766 "nvme_ioq_poll_period_us": 0, 01:26:10.766 "rdma_cm_event_timeout_ms": 0, 01:26:10.766 "rdma_max_cq_size": 0, 01:26:10.766 "rdma_srq_size": 0, 01:26:10.766 "reconnect_delay_sec": 0, 01:26:10.766 "timeout_admin_us": 0, 01:26:10.766 "timeout_us": 0, 01:26:10.766 "transport_ack_timeout": 0, 01:26:10.766 "transport_retry_count": 4, 01:26:10.766 "transport_tos": 0 01:26:10.766 } 01:26:10.766 }, 01:26:10.766 { 01:26:10.766 "method": "bdev_nvme_attach_controller", 01:26:10.766 "params": { 01:26:10.766 "adrfam": "IPv4", 01:26:10.766 "ctrlr_loss_timeout_sec": 0, 01:26:10.766 "ddgst": false, 01:26:10.766 "fast_io_fail_timeout_sec": 0, 01:26:10.766 "hdgst": false, 01:26:10.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:26:10.766 "multipath": "multipath", 01:26:10.766 "name": "nvme0", 01:26:10.766 "prchk_guard": false, 01:26:10.766 "prchk_reftag": false, 01:26:10.766 "psk": "key0", 01:26:10.766 "reconnect_delay_sec": 0, 01:26:10.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:26:10.766 "traddr": "10.0.0.3", 01:26:10.766 "trsvcid": "4420", 01:26:10.766 "trtype": "TCP" 01:26:10.766 } 01:26:10.766 }, 01:26:10.766 { 01:26:10.766 "method": "bdev_nvme_set_hotplug", 01:26:10.766 "params": { 01:26:10.766 "enable": false, 01:26:10.766 "period_us": 100000 01:26:10.766 } 01:26:10.766 }, 01:26:10.766 { 01:26:10.766 "method": "bdev_enable_histogram", 01:26:10.766 "params": { 01:26:10.766 "enable": true, 01:26:10.766 "name": "nvme0n1" 01:26:10.766 } 01:26:10.766 }, 01:26:10.766 { 01:26:10.766 "method": "bdev_wait_for_examine" 01:26:10.766 } 01:26:10.766 ] 01:26:10.766 }, 01:26:10.766 { 01:26:10.766 "subsystem": "nbd", 01:26:10.766 "config": [] 01:26:10.766 } 01:26:10.766 ] 01:26:10.766 }' 01:26:10.766 [2024-12-09 05:21:02.163078] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:10.766 [2024-12-09 05:21:02.163136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83551 ] 01:26:10.766 [2024-12-09 05:21:02.293973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:10.766 [2024-12-09 05:21:02.328739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:11.024 [2024-12-09 05:21:02.466180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:11.589 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:11.589 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:26:11.589 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 01:26:11.589 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:26:11.847 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:11.847 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:26:11.847 Running I/O for 1 seconds... 01:26:12.780 7341.00 IOPS, 28.68 MiB/s 01:26:12.780 Latency(us) 01:26:12.780 [2024-12-09T05:21:04.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:12.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:26:12.780 Verification LBA range: start 0x0 length 0x2000 01:26:12.780 nvme0n1 : 1.01 7402.56 28.92 0.00 0.00 17191.55 3352.42 13208.02 01:26:12.780 [2024-12-09T05:21:04.397Z] =================================================================================================================== 01:26:12.780 [2024-12-09T05:21:04.397Z] Total : 7402.56 28.92 0.00 0.00 17191.55 3352.42 13208.02 01:26:12.780 { 01:26:12.780 "results": [ 01:26:12.780 { 01:26:12.780 "job": "nvme0n1", 01:26:12.780 "core_mask": "0x2", 01:26:12.780 "workload": "verify", 01:26:12.780 "status": "finished", 01:26:12.780 "verify_range": { 01:26:12.780 "start": 0, 01:26:12.780 "length": 8192 01:26:12.780 }, 01:26:12.780 "queue_depth": 128, 01:26:12.780 "io_size": 4096, 01:26:12.780 "runtime": 1.008975, 01:26:12.780 "iops": 7402.562005996184, 01:26:12.780 "mibps": 28.916257835922593, 01:26:12.780 "io_failed": 0, 01:26:12.780 "io_timeout": 0, 01:26:12.780 "avg_latency_us": 17191.54860809294, 01:26:12.780 "min_latency_us": 3352.4184615384615, 01:26:12.780 "max_latency_us": 13208.024615384615 01:26:12.780 } 01:26:12.780 ], 01:26:12.780 "core_count": 1 01:26:12.780 } 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:26:12.780 
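The verify run above is driven entirely over the bdevperf RPC socket: once the TLS attach succeeds, the test confirms the controller name and then triggers the timed workload with the perform_tests helper, which returns the per-job results JSON printed above. The two commands, as used in this run, are:

    # Confirm the TLS-attached controller is present (the test expects "nvme0")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'
    # Start the timed verify workload and collect the results JSON
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests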
05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 01:26:12.780 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:26:12.780 nvmf_trace.0 01:26:13.038 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 01:26:13.038 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 83551 01:26:13.038 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83551 ']' 01:26:13.038 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83551 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83551 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:13.039 killing process with pid 83551 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83551' 01:26:13.039 Received shutdown signal, test time was about 1.000000 seconds 01:26:13.039 01:26:13.039 Latency(us) 01:26:13.039 [2024-12-09T05:21:04.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:13.039 [2024-12-09T05:21:04.656Z] =================================================================================================================== 01:26:13.039 [2024-12-09T05:21:04.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83551 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83551 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:13.039 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:13.039 rmmod nvme_tcp 01:26:13.039 rmmod nvme_fabrics 01:26:13.296 rmmod nvme_keyring 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 83507 ']' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83507 ']' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:13.296 killing process with pid 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83507' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83507 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:13.296 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:13.297 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:13.553 05:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:13.553 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:13.553 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:13.553 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QqE336F7ZU /tmp/tmp.4HdN4MLDuM /tmp/tmp.oT7ReP1J2E 01:26:13.554 01:26:13.554 real 1m21.212s 01:26:13.554 user 2m14.401s 01:26:13.554 sys 0m21.911s 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:26:13.554 ************************************ 01:26:13.554 END TEST nvmf_tls 01:26:13.554 ************************************ 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:26:13.554 ************************************ 01:26:13.554 START TEST nvmf_fips 01:26:13.554 ************************************ 01:26:13.554 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:26:13.811 * Looking for test storage... 
01:26:13.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:13.811 --rc genhtml_branch_coverage=1 01:26:13.811 --rc genhtml_function_coverage=1 01:26:13.811 --rc genhtml_legend=1 01:26:13.811 --rc geninfo_all_blocks=1 01:26:13.811 --rc geninfo_unexecuted_blocks=1 01:26:13.811 01:26:13.811 ' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:13.811 --rc genhtml_branch_coverage=1 01:26:13.811 --rc genhtml_function_coverage=1 01:26:13.811 --rc genhtml_legend=1 01:26:13.811 --rc geninfo_all_blocks=1 01:26:13.811 --rc geninfo_unexecuted_blocks=1 01:26:13.811 01:26:13.811 ' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:13.811 --rc genhtml_branch_coverage=1 01:26:13.811 --rc genhtml_function_coverage=1 01:26:13.811 --rc genhtml_legend=1 01:26:13.811 --rc geninfo_all_blocks=1 01:26:13.811 --rc geninfo_unexecuted_blocks=1 01:26:13.811 01:26:13.811 ' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:13.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:13.811 --rc genhtml_branch_coverage=1 01:26:13.811 --rc genhtml_function_coverage=1 01:26:13.811 --rc genhtml_legend=1 01:26:13.811 --rc geninfo_all_blocks=1 01:26:13.811 --rc geninfo_unexecuted_blocks=1 01:26:13.811 01:26:13.811 ' 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:13.811 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:13.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 01:26:13.812 Error setting digest 01:26:13.812 408222D6B07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 01:26:13.812 408222D6B07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 01:26:13.812 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:14.070 
05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:14.070 Cannot find device "nvmf_init_br" 01:26:14.070 05:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:14.070 Cannot find device "nvmf_init_br2" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:14.070 Cannot find device "nvmf_tgt_br" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:14.070 Cannot find device "nvmf_tgt_br2" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:14.070 Cannot find device "nvmf_init_br" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:14.070 Cannot find device "nvmf_init_br2" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:14.070 Cannot find device "nvmf_tgt_br" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:14.070 Cannot find device "nvmf_tgt_br2" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:14.070 Cannot find device "nvmf_br" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:14.070 Cannot find device "nvmf_init_if" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:14.070 Cannot find device "nvmf_init_if2" 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:14.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:14.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:14.070 05:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:14.070 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:14.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:14.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:26:14.328 01:26:14.328 --- 10.0.0.3 ping statistics --- 01:26:14.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:14.328 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:14.328 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:14.328 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 01:26:14.328 01:26:14.328 --- 10.0.0.4 ping statistics --- 01:26:14.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:14.328 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:14.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:14.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 01:26:14.328 01:26:14.328 --- 10.0.0.1 ping statistics --- 01:26:14.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:14.328 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:14.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:14.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 01:26:14.328 01:26:14.328 --- 10.0.0.2 ping statistics --- 01:26:14.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:14.328 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:26:14.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=83877 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 83877 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 83877 ']' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:14.328 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:26:14.328 [2024-12-09 05:21:05.788327] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
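Before the target process above could be started inside nvmf_tgt_ns_spdk, nvmf_veth_init stitched initiator and target together over veth pairs and a bridge. Condensed from the commands in the trace (one pair shown; the second nvmf_*_if2/br2 pair is built the same way; needs root):

    ip netns add nvmf_tgt_ns_spdk                                # target lives in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge ties the two pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                           # initiator -> target across the bridge

Tagging the iptables rules with an SPDK_NVMF comment is what lets the teardown later in this log restore the firewall with iptables-save | grep -v SPDK_NVMF | iptables-restore instead of tracking individual rule handles.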
01:26:14.328 [2024-12-09 05:21:05.788400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:14.328 [2024-12-09 05:21:05.932661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:14.585 [2024-12-09 05:21:05.972669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:14.585 [2024-12-09 05:21:05.972701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:14.585 [2024-12-09 05:21:05.972707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:14.585 [2024-12-09 05:21:05.972712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:14.585 [2024-12-09 05:21:05.972717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:14.585 [2024-12-09 05:21:05.972960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:15.148 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:15.148 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:26:15.148 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:15.148 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tK7 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tK7 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tK7 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tK7 01:26:15.149 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:26:15.405 [2024-12-09 05:21:06.899170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:15.405 [2024-12-09 05:21:06.915122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:15.405 [2024-12-09 05:21:06.915254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:15.405 malloc0 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:26:15.405 05:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=83937 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 83937 /var/tmp/bdevperf.sock 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 83937 ']' 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:26:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:15.405 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:26:15.405 [2024-12-09 05:21:07.018871] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:15.406 [2024-12-09 05:21:07.019022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83937 ] 01:26:15.663 [2024-12-09 05:21:07.158846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:15.663 [2024-12-09 05:21:07.194290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:16.228 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:16.228 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:26:16.228 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tK7 01:26:16.485 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:26:16.743 [2024-12-09 05:21:08.240139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:16.743 TLSTESTn1 01:26:16.743 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:26:17.001 Running I/O for 10 seconds... 
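The 10-second run whose samples follow goes over NVMe/TCP with TLS, and the whole client-side setup is the PSK file written by fips.sh plus the two bdevperf RPCs traced above. Condensed, with the key string, socket, and addresses as they appear in the log (run from the SPDK tree, with bdevperf already listening on its RPC socket):

    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"                                        # rpc.py refuses world-readable keys
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests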
01:26:18.867 6084.00 IOPS, 23.77 MiB/s [2024-12-09T05:21:11.854Z] 6702.50 IOPS, 26.18 MiB/s [2024-12-09T05:21:12.420Z] 6908.33 IOPS, 26.99 MiB/s [2024-12-09T05:21:13.791Z] 7017.75 IOPS, 27.41 MiB/s [2024-12-09T05:21:14.722Z] 7084.60 IOPS, 27.67 MiB/s [2024-12-09T05:21:15.652Z] 7127.00 IOPS, 27.84 MiB/s [2024-12-09T05:21:16.585Z] 7158.86 IOPS, 27.96 MiB/s [2024-12-09T05:21:17.518Z] 7181.38 IOPS, 28.05 MiB/s [2024-12-09T05:21:18.450Z] 7200.11 IOPS, 28.13 MiB/s [2024-12-09T05:21:18.450Z] 7212.10 IOPS, 28.17 MiB/s 01:26:26.833 Latency(us) 01:26:26.833 [2024-12-09T05:21:18.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:26.833 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:26:26.833 Verification LBA range: start 0x0 length 0x2000 01:26:26.833 TLSTESTn1 : 10.01 7218.43 28.20 0.00 0.00 17704.51 3276.80 17845.96 01:26:26.833 [2024-12-09T05:21:18.450Z] =================================================================================================================== 01:26:26.833 [2024-12-09T05:21:18.450Z] Total : 7218.43 28.20 0.00 0.00 17704.51 3276.80 17845.96 01:26:26.833 { 01:26:26.833 "results": [ 01:26:26.833 { 01:26:26.833 "job": "TLSTESTn1", 01:26:26.833 "core_mask": "0x4", 01:26:26.833 "workload": "verify", 01:26:26.833 "status": "finished", 01:26:26.833 "verify_range": { 01:26:26.833 "start": 0, 01:26:26.833 "length": 8192 01:26:26.833 }, 01:26:26.833 "queue_depth": 128, 01:26:26.833 "io_size": 4096, 01:26:26.833 "runtime": 10.008967, 01:26:26.833 "iops": 7218.427236297212, 01:26:26.833 "mibps": 28.196981391785986, 01:26:26.833 "io_failed": 0, 01:26:26.833 "io_timeout": 0, 01:26:26.833 "avg_latency_us": 17704.50710063594, 01:26:26.833 "min_latency_us": 3276.8, 01:26:26.833 "max_latency_us": 17845.956923076923 01:26:26.833 } 01:26:26.833 ], 01:26:26.833 "core_count": 1 01:26:26.833 } 01:26:26.833 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:26:26.833 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:26:26.833 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 01:26:26.834 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 01:26:26.834 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:26:27.091 nvmf_trace.0 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 83937 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 83937 ']' 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 83937 
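bdevperf.py reports the finished job twice above: as the fixed-width latency table and as a JSON object, and the JSON is the easier form to post-process. A small sketch, assuming jq is available and the object was captured to bdevperf.json (the file name is illustrative):

    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.avg_latency_us|floor) us avg latency"' bdevperf.json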
01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83937 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83937' 01:26:27.091 killing process with pid 83937 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 83937 01:26:27.091 Received shutdown signal, test time was about 10.000000 seconds 01:26:27.091 01:26:27.091 Latency(us) 01:26:27.091 [2024-12-09T05:21:18.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:27.091 [2024-12-09T05:21:18.708Z] =================================================================================================================== 01:26:27.091 [2024-12-09T05:21:18.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 83937 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:27.091 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:27.351 rmmod nvme_tcp 01:26:27.351 rmmod nvme_fabrics 01:26:27.351 rmmod nvme_keyring 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:27.351 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 83877 ']' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 83877 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 83877 ']' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 83877 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83877 01:26:27.352 killing process with pid 83877 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83877' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 83877 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 83877 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:27.352 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:27.653 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:27.653 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 01:26:27.653 05:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tK7 01:26:27.653 ************************************ 01:26:27.653 END TEST nvmf_fips 01:26:27.653 ************************************ 01:26:27.653 01:26:27.653 real 0m14.022s 01:26:27.653 user 0m20.518s 01:26:27.653 sys 0m4.565s 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:26:27.653 ************************************ 01:26:27.653 START TEST nvmf_control_msg_list 01:26:27.653 ************************************ 01:26:27.653 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:26:27.922 * Looking for test storage... 01:26:27.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:27.922 --rc genhtml_branch_coverage=1 01:26:27.922 --rc genhtml_function_coverage=1 01:26:27.922 --rc genhtml_legend=1 01:26:27.922 --rc geninfo_all_blocks=1 01:26:27.922 --rc geninfo_unexecuted_blocks=1 01:26:27.922 01:26:27.922 ' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:27.922 --rc genhtml_branch_coverage=1 01:26:27.922 --rc genhtml_function_coverage=1 01:26:27.922 --rc genhtml_legend=1 01:26:27.922 --rc geninfo_all_blocks=1 01:26:27.922 --rc geninfo_unexecuted_blocks=1 01:26:27.922 01:26:27.922 ' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:27.922 --rc genhtml_branch_coverage=1 01:26:27.922 --rc genhtml_function_coverage=1 01:26:27.922 --rc genhtml_legend=1 01:26:27.922 --rc geninfo_all_blocks=1 01:26:27.922 --rc geninfo_unexecuted_blocks=1 01:26:27.922 01:26:27.922 ' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:27.922 --rc genhtml_branch_coverage=1 01:26:27.922 --rc genhtml_function_coverage=1 01:26:27.922 --rc genhtml_legend=1 01:26:27.922 --rc geninfo_all_blocks=1 01:26:27.922 --rc 
geninfo_unexecuted_blocks=1 01:26:27.922 01:26:27.922 ' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:27.922 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:27.923 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:27.923 Cannot find device "nvmf_init_br" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:27.923 Cannot find device "nvmf_init_br2" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:27.923 Cannot find device "nvmf_tgt_br" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:27.923 Cannot find device "nvmf_tgt_br2" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:27.923 Cannot find device "nvmf_init_br" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:27.923 Cannot find device "nvmf_init_br2" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:27.923 Cannot find device "nvmf_tgt_br" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:27.923 Cannot find device "nvmf_tgt_br2" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:27.923 Cannot find device "nvmf_br" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:27.923 Cannot find 
device "nvmf_init_if" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:27.923 Cannot find device "nvmf_init_if2" 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:27.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:27.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:27.923 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:28.182 05:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:28.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:28.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 01:26:28.182 01:26:28.182 --- 10.0.0.3 ping statistics --- 01:26:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:28.182 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:28.182 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:28.182 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 01:26:28.182 01:26:28.182 --- 10.0.0.4 ping statistics --- 01:26:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:28.182 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:28.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:26:28.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:26:28.182 01:26:28.182 --- 10.0.0.1 ping statistics --- 01:26:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:28.182 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:28.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:26:28.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 01:26:28.182 01:26:28.182 --- 10.0.0.2 ping statistics --- 01:26:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:28.182 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:28.182 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84337 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84337 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84337 ']' 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:28.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
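For readers following the nvmf_veth_init step traced above, the commands it runs build a small self-contained test network: a network namespace (nvmf_tgt_ns_spdk) that will host the SPDK target, two veth pairs for the initiator side and two for the target side, a bridge (nvmf_br) joining the host-side ends, 10.0.0.1-10.0.0.4/24 addressing, iptables ACCEPT rules tagged with an SPDK_NVMF comment so teardown can find them, and ping checks in both directions. The sketch below reproduces only that topology; it is an illustration assembled from the expanded commands in the log (same interface names, addresses, and port), not the common.sh script itself, and the ipts() body is an assumption inferred from the expanded iptables calls.

#!/usr/bin/env bash
# Hedged sketch of the topology nvmf_veth_init sets up, using the names and addresses logged above.
set -e

ip netns add nvmf_tgt_ns_spdk                                    # namespace that will run nvmf_tgt

ip link add nvmf_init_if  type veth peer name nvmf_init_br       # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br        # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator addresses (host side)
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses (namespace side)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                                   # bridge the host-side ends together
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# The log's "ipts" helper tags every rule with an SPDK_NVMF comment so teardown can remove it later;
# this one-liner is an assumed reimplementation that reproduces the expanded iptables calls seen above.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                                # sanity checks, as in the log
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the topology verified, the target is launched inside the namespace (NVMF_TARGET_NS_CMD prefixes nvmf_tgt with ip netns exec nvmf_tgt_ns_spdk), so the spdk_nvme_perf initiators on the host reach it at 10.0.0.3:4420 through the bridge, as the following entries show.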
01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:28.183 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:28.183 [2024-12-09 05:21:19.699019] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:28.183 [2024-12-09 05:21:19.699068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:28.440 [2024-12-09 05:21:19.831004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:28.440 [2024-12-09 05:21:19.865450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:28.440 [2024-12-09 05:21:19.865597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:28.440 [2024-12-09 05:21:19.865659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:28.440 [2024-12-09 05:21:19.865687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:28.440 [2024-12-09 05:21:19.865703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:28.440 [2024-12-09 05:21:19.865958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:29.003 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:29.003 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 01:26:29.003 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:29.003 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:29.003 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 [2024-12-09 05:21:20.643916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 Malloc0 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:29.260 [2024-12-09 05:21:20.678415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84386 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84387 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84388 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84386 01:26:29.260 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:29.260 [2024-12-09 05:21:20.846906] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 01:26:29.260 [2024-12-09 05:21:20.847074] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:26:29.260 [2024-12-09 05:21:20.847274] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:26:30.631 Initializing NVMe Controllers 01:26:30.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:26:30.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 01:26:30.631 Initialization complete. Launching workers. 01:26:30.631 ======================================================== 01:26:30.631 Latency(us) 01:26:30.631 Device Information : IOPS MiB/s Average min max 01:26:30.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5984.00 23.38 166.84 95.01 383.42 01:26:30.631 ======================================================== 01:26:30.631 Total : 5984.00 23.38 166.84 95.01 383.42 01:26:30.631 01:26:30.631 Initializing NVMe Controllers 01:26:30.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:26:30.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 01:26:30.631 Initialization complete. Launching workers. 01:26:30.631 ======================================================== 01:26:30.631 Latency(us) 01:26:30.631 Device Information : IOPS MiB/s Average min max 01:26:30.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5974.00 23.34 167.11 103.34 382.14 01:26:30.631 ======================================================== 01:26:30.631 Total : 5974.00 23.34 167.11 103.34 382.14 01:26:30.631 01:26:30.631 Initializing NVMe Controllers 01:26:30.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:26:30.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 01:26:30.631 Initialization complete. Launching workers. 
01:26:30.631 ======================================================== 01:26:30.631 Latency(us) 01:26:30.631 Device Information : IOPS MiB/s Average min max 01:26:30.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5974.97 23.34 167.11 119.64 365.80 01:26:30.631 ======================================================== 01:26:30.631 Total : 5974.97 23.34 167.11 119.64 365.80 01:26:30.631 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84387 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84388 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:30.631 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:30.631 rmmod nvme_tcp 01:26:30.631 rmmod nvme_fabrics 01:26:30.631 rmmod nvme_keyring 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84337 ']' 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84337 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84337 ']' 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84337 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84337 01:26:30.631 killing process with pid 84337 01:26:30.631 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84337' 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84337 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 84337 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:30.632 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:30.889 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 01:26:30.890 01:26:30.890 real 0m3.198s 01:26:30.890 user 0m5.472s 01:26:30.890 
sys 0m1.135s 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:30.890 ************************************ 01:26:30.890 END TEST nvmf_control_msg_list 01:26:30.890 ************************************ 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:26:30.890 ************************************ 01:26:30.890 START TEST nvmf_wait_for_buf 01:26:30.890 ************************************ 01:26:30.890 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:26:31.150 * Looking for test storage... 01:26:31.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:31.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:31.150 --rc genhtml_branch_coverage=1 01:26:31.150 --rc genhtml_function_coverage=1 01:26:31.150 --rc genhtml_legend=1 01:26:31.150 --rc geninfo_all_blocks=1 01:26:31.150 --rc geninfo_unexecuted_blocks=1 01:26:31.150 01:26:31.150 ' 01:26:31.150 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:31.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:31.150 --rc genhtml_branch_coverage=1 01:26:31.150 --rc genhtml_function_coverage=1 01:26:31.150 --rc genhtml_legend=1 01:26:31.150 --rc geninfo_all_blocks=1 01:26:31.150 --rc geninfo_unexecuted_blocks=1 01:26:31.151 01:26:31.151 ' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:31.151 --rc genhtml_branch_coverage=1 01:26:31.151 --rc genhtml_function_coverage=1 01:26:31.151 --rc genhtml_legend=1 01:26:31.151 --rc geninfo_all_blocks=1 01:26:31.151 --rc geninfo_unexecuted_blocks=1 01:26:31.151 01:26:31.151 ' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:31.151 --rc genhtml_branch_coverage=1 01:26:31.151 --rc genhtml_function_coverage=1 01:26:31.151 --rc genhtml_legend=1 01:26:31.151 --rc geninfo_all_blocks=1 01:26:31.151 --rc geninfo_unexecuted_blocks=1 01:26:31.151 01:26:31.151 ' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:31.151 05:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:31.151 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
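The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message, which appears each time common.sh is re-sourced (here and in the control_msg_list run above), is a bash quirk rather than a test failure: test's -eq operator needs integer operands, and the variable being checked expands to an empty string. The fragment below reproduces the message and shows one conventional guard; the variable name is hypothetical, since the log does not reveal which variable common.sh line 33 actually tests.

#!/usr/bin/env bash
# Reproducing the "[: : integer expression expected" message and one way to avoid it.
# SOME_FLAG is a hypothetical stand-in; the real variable at common.sh line 33 is not shown in the log.
SOME_FLAG=""

if [ "$SOME_FLAG" -eq 1 ]; then      # empty string is not an integer: prints the error, test evaluates false
    echo "flag enabled"
fi

if [ "${SOME_FLAG:-0}" -eq 1 ]; then # defaulting to 0 keeps the test quiet and still false
    echo "flag enabled"
fi

Because the failing test simply evaluates false inside an if, the script falls through to the next branches ('[' -n '' ']' and so on) and execution continues, which is why the message recurs in this log but is harmless.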
01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:31.151 Cannot find device "nvmf_init_br" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:31.151 Cannot find device "nvmf_init_br2" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:31.151 Cannot find device "nvmf_tgt_br" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:31.151 Cannot find device "nvmf_tgt_br2" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:31.151 Cannot find device "nvmf_init_br" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:31.151 Cannot find device "nvmf_init_br2" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:31.151 Cannot find device "nvmf_tgt_br" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:31.151 Cannot find device "nvmf_tgt_br2" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:31.151 Cannot find device "nvmf_br" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:31.151 Cannot find device "nvmf_init_if" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:31.151 Cannot find device "nvmf_init_if2" 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:31.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:31.151 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:31.151 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:31.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:31.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 01:26:31.412 01:26:31.412 --- 10.0.0.3 ping statistics --- 01:26:31.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:31.412 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:31.412 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:31.412 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 01:26:31.412 01:26:31.412 --- 10.0.0.4 ping statistics --- 01:26:31.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:31.412 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:31.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:31.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:26:31.412 01:26:31.412 --- 10.0.0.1 ping statistics --- 01:26:31.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:31.412 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:31.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:31.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:26:31.412 01:26:31.412 --- 10.0.0.2 ping statistics --- 01:26:31.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:31.412 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=84620 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 84620 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 84620 ']' 01:26:31.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:31.412 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:31.412 [2024-12-09 05:21:22.984843] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:31.412 [2024-12-09 05:21:22.984895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:31.673 [2024-12-09 05:21:23.124953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:31.673 [2024-12-09 05:21:23.177587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:31.673 [2024-12-09 05:21:23.177628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:31.673 [2024-12-09 05:21:23.177636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:31.673 [2024-12-09 05:21:23.177642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:31.673 [2024-12-09 05:21:23.177647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:31.673 [2024-12-09 05:21:23.177955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 01:26:32.609 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 Malloc0 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 [2024-12-09 05:21:24.058768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:32.610 [2024-12-09 05:21:24.082857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.610 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:32.867 [2024-12-09 05:21:24.274653] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 01:26:34.237 Initializing NVMe Controllers 01:26:34.237 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:26:34.237 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 01:26:34.237 Initialization complete. Launching workers. 01:26:34.237 ======================================================== 01:26:34.237 Latency(us) 01:26:34.237 Device Information : IOPS MiB/s Average min max 01:26:34.237 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.49 15.94 32498.53 8014.55 64003.64 01:26:34.237 ======================================================== 01:26:34.237 Total : 127.49 15.94 32498.53 8014.55 64003.64 01:26:34.237 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:34.237 rmmod nvme_tcp 01:26:34.237 rmmod nvme_fabrics 01:26:34.237 rmmod nvme_keyring 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 84620 ']' 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 84620 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 84620 ']' 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 84620 01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
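What the wait_for_buf assertion above amounts to, condensed from the rpc_cmd/jq calls traced at wait_for_buf.sh@32-33 (a sketch, not the literal script): the transport was created with deliberately small buffer pools (iobuf_set_options --small-pool-count 154, nvmf_create_transport ... -n 24 -b 24), so the perf run must have had to retry buffer allocation at least once.

  retry_count=$(rpc_cmd iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  # retry_count == 0 would mean the wait-for-buffer path was never exercised,
  # so the test should fail here (the real script's error handling may differ).
  [[ "$retry_count" -eq 0 ]] && exit 1

Here retry_count came back as 2022, so the check passes and teardown begins.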
01:26:34.237 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84620 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:34.238 killing process with pid 84620 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84620' 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 84620 01:26:34.238 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 84620 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:34.496 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:34.496 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:34.496 05:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 01:26:34.754 01:26:34.754 real 0m3.706s 01:26:34.754 user 0m3.299s 01:26:34.754 sys 0m0.638s 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:34.754 ************************************ 01:26:34.754 END TEST nvmf_wait_for_buf 01:26:34.754 ************************************ 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:26:34.754 05:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:26:34.755 ************************************ 01:26:34.755 START TEST nvmf_nsid 01:26:34.755 ************************************ 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:26:34.755 * Looking for test storage... 
01:26:34.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:34.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:34.755 --rc genhtml_branch_coverage=1 01:26:34.755 --rc genhtml_function_coverage=1 01:26:34.755 --rc genhtml_legend=1 01:26:34.755 --rc geninfo_all_blocks=1 01:26:34.755 --rc geninfo_unexecuted_blocks=1 01:26:34.755 01:26:34.755 ' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:34.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:34.755 --rc genhtml_branch_coverage=1 01:26:34.755 --rc genhtml_function_coverage=1 01:26:34.755 --rc genhtml_legend=1 01:26:34.755 --rc geninfo_all_blocks=1 01:26:34.755 --rc geninfo_unexecuted_blocks=1 01:26:34.755 01:26:34.755 ' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:34.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:34.755 --rc genhtml_branch_coverage=1 01:26:34.755 --rc genhtml_function_coverage=1 01:26:34.755 --rc genhtml_legend=1 01:26:34.755 --rc geninfo_all_blocks=1 01:26:34.755 --rc geninfo_unexecuted_blocks=1 01:26:34.755 01:26:34.755 ' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:34.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:34.755 --rc genhtml_branch_coverage=1 01:26:34.755 --rc genhtml_function_coverage=1 01:26:34.755 --rc genhtml_legend=1 01:26:34.755 --rc geninfo_all_blocks=1 01:26:34.755 --rc geninfo_unexecuted_blocks=1 01:26:34.755 01:26:34.755 ' 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:34.755 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
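The lcov version check traced above (scripts/common.sh lt/cmp_versions) decides which coverage flags get exported: the dotted versions are split into components and compared field by field. A simplified sketch of the idea, not the actual scripts/common.sh implementation:

  version_lt() {                      # 0 (true) if $1 is an older version than $2
      local IFS='.-'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                        # equal
  }
  # lcov 1.15 is older than 2, so the legacy flags are chosen, matching the trace:
  version_lt 1.15 2 && echo "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"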
01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:35.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:35.015 Cannot find device "nvmf_init_br" 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:35.015 Cannot find device "nvmf_init_br2" 01:26:35.015 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:35.016 Cannot find device "nvmf_tgt_br" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:35.016 Cannot find device "nvmf_tgt_br2" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:35.016 Cannot find device "nvmf_init_br" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:35.016 Cannot find device "nvmf_init_br2" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:35.016 Cannot find device "nvmf_tgt_br" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:35.016 Cannot find device "nvmf_tgt_br2" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:35.016 Cannot find device "nvmf_br" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:35.016 Cannot find device "nvmf_init_if" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:35.016 Cannot find device "nvmf_init_if2" 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:35.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
01:26:35.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:35.016 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
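The ip commands traced above rebuild the same virtual topology used for the wait_for_buf run earlier: veth pairs bridged between the host (initiator side) and the nvmf_tgt_ns_spdk namespace (target side). Condensed, with only one of the two pairs shown (nvmf_init_if2/nvmf_tgt_if2 are set up the same way with 10.0.0.2 and 10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  # bring everything up, then allow TCP/4420 in and bridge forwarding via the
  # iptables rules that follow in the trace

So 10.0.0.1/10.0.0.2 are initiator addresses on the host and 10.0.0.3/10.0.0.4 are target addresses inside the namespace, all reachable through nvmf_br.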
01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:35.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:35.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 01:26:35.274 01:26:35.274 --- 10.0.0.3 ping statistics --- 01:26:35.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:35.274 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:35.274 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:35.274 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:26:35.274 01:26:35.274 --- 10.0.0.4 ping statistics --- 01:26:35.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:35.274 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:35.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:35.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:26:35.274 01:26:35.274 --- 10.0.0.1 ping statistics --- 01:26:35.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:35.274 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:35.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:35.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:26:35.274 01:26:35.274 --- 10.0.0.2 ping statistics --- 01:26:35.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:35.274 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=84909 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 84909 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 84909 ']' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 01:26:35.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:35.274 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:35.275 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:35.275 [2024-12-09 05:21:26.745521] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:26:35.275 [2024-12-09 05:21:26.745584] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:35.275 [2024-12-09 05:21:26.885583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:35.533 [2024-12-09 05:21:26.919913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:35.533 [2024-12-09 05:21:26.919948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:35.533 [2024-12-09 05:21:26.919955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:35.533 [2024-12-09 05:21:26.919959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:35.533 [2024-12-09 05:21:26.919964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:35.533 [2024-12-09 05:21:26.920203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=84952 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
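From this point the nsid test drives two targets: the nvmf_tgt started above inside nvmf_tgt_ns_spdk (pid 84909, RPC on the default /var/tmp/spdk.sock, reachable at tgt1addr=10.0.0.3) and a second spdk_tgt on the host side. Roughly, condensed from the trace (the exact process handling in nsid.sh may differ):

  # tgt2: a second target on the host, pinned to core 1 (-m 2), with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!                     # 84952 in this run
  # get_main_ns_ip resolved the host-side (initiator) address, so tgt2 is reached at:
  tgt2addr=10.0.0.1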
01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=05de7837-4fd5-4c35-a53a-1bdb28766051 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=bf5776b0-2da2-4545-8adb-eff7e570412e 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9d1558d6-88e5-430c-8ddd-2c75763a0cff 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.098 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:36.098 null0 01:26:36.098 null1 01:26:36.098 [2024-12-09 05:21:27.697733] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:36.098 [2024-12-09 05:21:27.697790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84952 ] 01:26:36.098 null2 01:26:36.098 [2024-12-09 05:21:27.706185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:36.355 [2024-12-09 05:21:27.730258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 84952 /var/tmp/tgt2.sock 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 84952 ']' 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 01:26:36.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
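waitforlisten above blocks until the freshly launched second target (pid 84952) answers on /var/tmp/tgt2.sock; the trace shows it takes a pid, an RPC socket path and a max_retries of 100. A rough sketch of that polling pattern, assuming rpc.py's rpc_get_methods as the liveness probe and a hypothetical $rootdir pointing at the SPDK checkout (the real helper in autotest_common.sh may differ in detail):

# Approximation of a waitforlisten-style poll; not the verbatim helper from the trace.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                     # give up if the target died
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                               # socket is up and answering RPCs
        fi
        sleep 0.1
    done
    return 1
}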
01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:36.355 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:36.355 [2024-12-09 05:21:27.837485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:36.355 [2024-12-09 05:21:27.872972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:36.612 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:36.612 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:26:36.612 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 01:26:36.868 [2024-12-09 05:21:28.372081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:36.868 [2024-12-09 05:21:28.388134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 01:26:36.868 nvme0n1 nvme0n2 01:26:36.868 nvme1n1 01:26:36.868 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 01:26:36.868 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 01:26:36.868 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 01:26:37.126 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:26:38.058 05:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 05de7837-4fd5-4c35-a53a-1bdb28766051 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=05de78374fd54c35a53a1bdb28766051 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 05DE78374FD54C35A53A1BDB28766051 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 05DE78374FD54C35A53A1BDB28766051 == \0\5\D\E\7\8\3\7\4\F\D\5\4\C\3\5\A\5\3\A\1\B\D\B\2\8\7\6\6\0\5\1 ]] 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid bf5776b0-2da2-4545-8adb-eff7e570412e 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 01:26:38.058 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bf5776b02da245458adbeff7e570412e 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BF5776B02DA245458ADBEFF7E570412E 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ BF5776B02DA245458ADBEFF7E570412E == \B\F\5\7\7\6\B\0\2\D\A\2\4\5\4\5\8\A\D\B\E\F\F\7\E\5\7\0\4\1\2\E ]] 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:26:38.316 05:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9d1558d6-88e5-430c-8ddd-2c75763a0cff 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9d1558d688e5430c8ddd2c75763a0cff 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9D1558D688E5430C8DDD2C75763A0CFF 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9D1558D688E5430C8DDD2C75763A0CFF == \9\D\1\5\5\8\D\6\8\8\E\5\4\3\0\C\8\D\D\D\2\C\7\5\7\6\3\A\0\C\F\F ]] 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 84952 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 84952 ']' 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 84952 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84952 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:38.316 killing process with pid 84952 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84952' 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 84952 01:26:38.316 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 84952 01:26:38.573 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 01:26:38.573 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:38.573 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
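The block above is the core of the nsid test: for each of the three namespaces, the NGUID reported by the kernel through nvme id-ns must equal the UUID the namespace was created with, minus the dashes and compared case-insensitively (uuid2nguid in the trace is essentially tr -d - plus upper-casing). A condensed sketch of one such check, including the waitforblk poll that precedes it (the 15-retry limit comes from the trace; the final re-check after the loop is folded into the break condition here):

# Verify that namespace <nsid> of controller <ctrlr> carries the NGUID derived from <uuid>.
check_nguid() {
    local ctrlr=$1 nsid=$2 uuid=$3 nguid expected i
    # waitforblk: give the block device up to ~15s to show up in lsblk.
    for ((i = 0; i < 15; i++)); do
        lsblk -l -o NAME | grep -q -w "${ctrlr}n${nsid}" && break
        sleep 1
    done
    # uuid2nguid: the NGUID is simply the UUID with the dashes stripped.
    expected=$(tr -d - <<< "$uuid")
    nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
    [[ ${nguid^^} == "${expected^^}" ]]
}

# Example matching the trace: check_nguid nvme0 1 05de7837-4fd5-4c35-a53a-1bdb28766051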
01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:38.831 rmmod nvme_tcp 01:26:38.831 rmmod nvme_fabrics 01:26:38.831 rmmod nvme_keyring 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 84909 ']' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 84909 ']' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:38.831 killing process with pid 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84909' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 84909 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:38.831 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 01:26:39.088 01:26:39.088 real 0m4.417s 01:26:39.088 user 0m6.638s 01:26:39.088 sys 0m0.965s 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:39.088 ************************************ 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:26:39.088 END TEST nvmf_nsid 01:26:39.088 ************************************ 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:26:39.088 01:26:39.088 real 6m21.035s 01:26:39.088 user 15m24.950s 01:26:39.088 sys 1m2.639s 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:39.088 05:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:26:39.088 ************************************ 01:26:39.088 END TEST nvmf_target_extra 01:26:39.088 ************************************ 01:26:39.345 05:21:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:26:39.345 05:21:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:39.345 05:21:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:39.345 05:21:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:26:39.345 ************************************ 01:26:39.345 START TEST nvmf_host 01:26:39.345 ************************************ 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:26:39.345 * Looking for test storage... 
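nvmf_veth_fini, traced just above, unwinds the virtual topology the suite builds for NVMe/TCP: the SPDK-tagged iptables rules are filtered out, the bridge ports are detached and brought down, the veth pairs and the bridge are deleted, and finally the nvmf_tgt_ns_spdk namespace is removed. The same steps, condensed from the trace; the 2>/dev/null suppression is added here so the sketch also tolerates an already-clean host (the "Cannot find device" lines later in this log come from the next test's pre-clean running these commands when nothing is left to delete):

# Teardown of the veth/bridge topology used by the NVMe/TCP tests, condensed from the trace.
nvmf_veth_fini_sketch() {
    # Drop only the iptables rules the tests added; they carry an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    local port
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster 2>/dev/null
        ip link set "$port" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # remove_spdk_ns in the trace
}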
01:26:39.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 01:26:39.345 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.346 --rc genhtml_branch_coverage=1 01:26:39.346 --rc genhtml_function_coverage=1 01:26:39.346 --rc genhtml_legend=1 01:26:39.346 --rc geninfo_all_blocks=1 01:26:39.346 --rc geninfo_unexecuted_blocks=1 01:26:39.346 01:26:39.346 ' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:39.346 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:26:39.346 --rc genhtml_branch_coverage=1 01:26:39.346 --rc genhtml_function_coverage=1 01:26:39.346 --rc genhtml_legend=1 01:26:39.346 --rc geninfo_all_blocks=1 01:26:39.346 --rc geninfo_unexecuted_blocks=1 01:26:39.346 01:26:39.346 ' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.346 --rc genhtml_branch_coverage=1 01:26:39.346 --rc genhtml_function_coverage=1 01:26:39.346 --rc genhtml_legend=1 01:26:39.346 --rc geninfo_all_blocks=1 01:26:39.346 --rc geninfo_unexecuted_blocks=1 01:26:39.346 01:26:39.346 ' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:39.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.346 --rc genhtml_branch_coverage=1 01:26:39.346 --rc genhtml_function_coverage=1 01:26:39.346 --rc genhtml_legend=1 01:26:39.346 --rc geninfo_all_blocks=1 01:26:39.346 --rc geninfo_unexecuted_blocks=1 01:26:39.346 01:26:39.346 ' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:39.346 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
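Both the nvmf_host umbrella and each sub-test run the same lcov probe seen above: scripts/common.sh splits the installed lcov version and the threshold "2" on ".", "-" and ":" and compares them field by field (lt 1.15 2 is true here, so the branch/function-coverage LCOV_OPTS get exported). A trimmed sketch of that comparison, reconstructed from the cmp_versions trace; the decimal helper that sanitizes non-numeric fields is omitted, and missing fields defaulting to 0 is an assumption:

# Field-wise version comparison in the spirit of the traced cmp_versions; not verbatim.
cmp_versions_sketch() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a > b)); then
            [[ $op == '>' || $op == '>=' || $op == '!=' ]]; return
        elif ((a < b)); then
            [[ $op == '<' || $op == '<=' || $op == '!=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions_sketch "$1" '<' "$2"; }   # lt 1.15 2 -> true, as in the trace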
01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.346 ************************************ 01:26:39.346 START TEST nvmf_multicontroller 01:26:39.346 ************************************ 01:26:39.346 05:21:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 01:26:39.346 * Looking for test storage... 01:26:39.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:39.604 05:21:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:39.604 05:21:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 01:26:39.604 05:21:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:39.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.604 --rc genhtml_branch_coverage=1 01:26:39.604 --rc genhtml_function_coverage=1 01:26:39.604 --rc genhtml_legend=1 01:26:39.604 --rc geninfo_all_blocks=1 01:26:39.604 --rc geninfo_unexecuted_blocks=1 01:26:39.604 01:26:39.604 ' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:39.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.604 --rc genhtml_branch_coverage=1 01:26:39.604 --rc genhtml_function_coverage=1 01:26:39.604 --rc genhtml_legend=1 01:26:39.604 --rc geninfo_all_blocks=1 01:26:39.604 --rc geninfo_unexecuted_blocks=1 01:26:39.604 01:26:39.604 ' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:39.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.604 --rc genhtml_branch_coverage=1 01:26:39.604 --rc genhtml_function_coverage=1 01:26:39.604 --rc genhtml_legend=1 01:26:39.604 --rc geninfo_all_blocks=1 01:26:39.604 --rc geninfo_unexecuted_blocks=1 01:26:39.604 01:26:39.604 ' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:39.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:39.604 --rc genhtml_branch_coverage=1 01:26:39.604 --rc genhtml_function_coverage=1 01:26:39.604 --rc genhtml_legend=1 01:26:39.604 --rc geninfo_all_blocks=1 01:26:39.604 --rc geninfo_unexecuted_blocks=1 01:26:39.604 01:26:39.604 ' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 01:26:39.604 05:21:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:39.604 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:39.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:26:39.605 05:21:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:39.605 05:21:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:39.605 Cannot find device "nvmf_init_br" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:39.605 Cannot find device "nvmf_init_br2" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:39.605 Cannot find device "nvmf_tgt_br" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:39.605 Cannot find device "nvmf_tgt_br2" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:39.605 Cannot find device "nvmf_init_br" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:39.605 Cannot find device "nvmf_init_br2" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:39.605 Cannot find device "nvmf_tgt_br" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:39.605 Cannot find device "nvmf_tgt_br2" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:39.605 Cannot find device "nvmf_br" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:39.605 Cannot find device "nvmf_init_if" 01:26:39.605 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:39.606 Cannot find device "nvmf_init_if2" 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:39.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:39.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:39.606 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:39.862 05:21:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:39.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:39.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 01:26:39.862 01:26:39.862 --- 10.0.0.3 ping statistics --- 01:26:39.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:39.862 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:39.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:39.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 01:26:39.862 01:26:39.862 --- 10.0.0.4 ping statistics --- 01:26:39.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:39.862 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:39.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:39.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:26:39.862 01:26:39.862 --- 10.0.0.1 ping statistics --- 01:26:39.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:39.862 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:39.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:39.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 01:26:39.862 01:26:39.862 --- 10.0.0.2 ping statistics --- 01:26:39.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:39.862 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:39.862 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=85311 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 85311 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85311 ']' 01:26:39.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:39.863 05:21:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:39.863 [2024-12-09 05:21:31.417983] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
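Before the target comes up, TCP port 4420 is opened on the initiator-side interfaces with iptables rules tagged by an 'SPDK_NVMF:' comment (so teardown can later strip exactly these rules), reachability is confirmed with single pings in both directions, nvme-tcp is loaded, and the target application is launched inside the namespace so it listens on 10.0.0.3/10.0.0.4 while the initiator stays on the host. Roughly, using the paths and flags from the trace above:

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # wait until the app answers on /var/tmp/spdk.sock before issuing any RPCs

waitforlisten handles that last wait, polling the RPC socket until the new process (pid 85311 here) responds.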
01:26:39.863 [2024-12-09 05:21:31.418124] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:40.118 [2024-12-09 05:21:31.556653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:26:40.118 [2024-12-09 05:21:31.592545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:40.118 [2024-12-09 05:21:31.592588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:40.118 [2024-12-09 05:21:31.592594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:40.118 [2024-12-09 05:21:31.592599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:40.118 [2024-12-09 05:21:31.592604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:40.118 [2024-12-09 05:21:31.593392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:40.118 [2024-12-09 05:21:31.593691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:40.118 [2024-12-09 05:21:31.593853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:40.682 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:40.682 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 01:26:40.682 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:40.682 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:40.682 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 [2024-12-09 05:21:32.330646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 Malloc0 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 [2024-12-09 05:21:32.390495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 [2024-12-09 05:21:32.398447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 Malloc1 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85368 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85368 /var/tmp/bdevperf.sock 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85368 ']' 01:26:40.940 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:26:40.941 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:40.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:26:40.941 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
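rpc_cmd in these traces is, roughly, scripts/rpc.py pointed at the target's default RPC socket (/var/tmp/spdk.sock). The provisioning performed so far is approximately the following sequence; cnode2/Malloc1 are set up the same way as cnode1/Malloc0:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf is then started with -z, so it idles on its own RPC socket (/var/tmp/bdevperf.sock); the controller attach/detach calls that follow are issued against that socket rather than the target's.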
01:26:40.941 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:40.941 05:21:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.875 NVMe0n1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.875 1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.875 2024/12/09 05:21:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:26:41.875 request: 01:26:41.875 { 01:26:41.875 "method": "bdev_nvme_attach_controller", 01:26:41.875 "params": { 01:26:41.875 "name": "NVMe0", 01:26:41.875 "trtype": "tcp", 01:26:41.875 "traddr": "10.0.0.3", 01:26:41.875 "adrfam": "ipv4", 01:26:41.875 "trsvcid": "4420", 01:26:41.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:26:41.875 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 01:26:41.875 "hostaddr": "10.0.0.1", 01:26:41.875 "prchk_reftag": false, 01:26:41.875 "prchk_guard": false, 01:26:41.875 "hdgst": false, 01:26:41.875 "ddgst": false, 01:26:41.875 "allow_unrecognized_csi": false 01:26:41.875 } 01:26:41.875 } 01:26:41.875 Got JSON-RPC error response 01:26:41.875 GoRPCClient: error on JSON-RPC call 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.875 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.875 2024/12/09 05:21:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:26:41.875 request: 01:26:41.875 { 01:26:41.875 "method": "bdev_nvme_attach_controller", 01:26:41.875 "params": { 01:26:41.875 "name": "NVMe0", 01:26:41.875 "trtype": "tcp", 01:26:41.875 "traddr": "10.0.0.3", 01:26:41.875 "adrfam": "ipv4", 01:26:41.875 "trsvcid": "4420", 01:26:41.876 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:26:41.876 "hostaddr": "10.0.0.1", 01:26:41.876 "prchk_reftag": false, 01:26:41.876 "prchk_guard": false, 01:26:41.876 "hdgst": false, 01:26:41.876 "ddgst": false, 01:26:41.876 "allow_unrecognized_csi": false 01:26:41.876 } 01:26:41.876 } 01:26:41.876 Got JSON-RPC error response 01:26:41.876 GoRPCClient: error on JSON-RPC call 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:41.876 2024/12/09 05:21:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 01:26:41.876 request: 01:26:41.876 { 01:26:41.876 
"method": "bdev_nvme_attach_controller", 01:26:41.876 "params": { 01:26:41.876 "name": "NVMe0", 01:26:41.876 "trtype": "tcp", 01:26:41.876 "traddr": "10.0.0.3", 01:26:41.876 "adrfam": "ipv4", 01:26:41.876 "trsvcid": "4420", 01:26:41.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:26:41.876 "hostaddr": "10.0.0.1", 01:26:41.876 "prchk_reftag": false, 01:26:41.876 "prchk_guard": false, 01:26:41.876 "hdgst": false, 01:26:41.876 "ddgst": false, 01:26:41.876 "multipath": "disable", 01:26:41.876 "allow_unrecognized_csi": false 01:26:41.876 } 01:26:41.876 } 01:26:41.876 Got JSON-RPC error response 01:26:41.876 GoRPCClient: error on JSON-RPC call 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.876 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:42.178 2024/12/09 05:21:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:26:42.178 request: 01:26:42.178 { 01:26:42.178 "method": "bdev_nvme_attach_controller", 01:26:42.178 "params": { 01:26:42.178 "name": "NVMe0", 01:26:42.178 "trtype": "tcp", 01:26:42.178 "traddr": 
"10.0.0.3", 01:26:42.178 "adrfam": "ipv4", 01:26:42.178 "trsvcid": "4420", 01:26:42.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:26:42.178 "hostaddr": "10.0.0.1", 01:26:42.178 "prchk_reftag": false, 01:26:42.178 "prchk_guard": false, 01:26:42.178 "hdgst": false, 01:26:42.178 "ddgst": false, 01:26:42.178 "multipath": "failover", 01:26:42.178 "allow_unrecognized_csi": false 01:26:42.178 } 01:26:42.178 } 01:26:42.178 Got JSON-RPC error response 01:26:42.178 GoRPCClient: error on JSON-RPC call 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:42.178 NVMe0n1 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:42.178 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.178 05:21:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 01:26:42.178 05:21:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:26:43.580 { 01:26:43.580 "results": [ 01:26:43.580 { 01:26:43.580 "job": "NVMe0n1", 01:26:43.580 "core_mask": "0x1", 01:26:43.580 "workload": "write", 01:26:43.580 "status": "finished", 01:26:43.580 "queue_depth": 128, 01:26:43.580 "io_size": 4096, 01:26:43.580 "runtime": 1.006077, 01:26:43.580 "iops": 23382.902103914512, 01:26:43.580 "mibps": 91.33946134341606, 01:26:43.580 "io_failed": 0, 01:26:43.580 "io_timeout": 0, 01:26:43.580 "avg_latency_us": 5465.232831586692, 01:26:43.580 "min_latency_us": 2419.7907692307695, 01:26:43.580 "max_latency_us": 20870.695384615385 01:26:43.580 } 01:26:43.580 ], 01:26:43.580 "core_count": 1 01:26:43.580 } 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 nvme1n1 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
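The rejected attach attempts above exercise the naming and multipath rules: re-using the controller name NVMe0 for the already-registered 10.0.0.3:4420 path with a different hostnqn, with a different subsystem NQN, with multipath disabled, or with multipath=failover to that same path all fail with Code=-114 ("already exists"), while attaching to the genuinely new listener on port 4421 succeeds. After the bdevperf run, the -i option (which pins the initiator's source address) is verified from the target side: the subsystem's qpair should report the requested peer address. A sketch of that check, using the same jq filter as the trace:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
  ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
      | jq -r '.[].peer_address.traddr'        # expected: 10.0.0.1

The trace repeats the same check with -i 10.0.0.2 to confirm the second initiator address is honored as well.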
01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 nvme1n1 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 85368 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85368 ']' 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85368 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85368 01:26:43.580 killing process with pid 85368 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85368' 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85368 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85368 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 01:26:43.580 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:26:43.580 [2024-12-09 05:21:32.483249] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:43.580 [2024-12-09 05:21:32.483639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85368 ] 01:26:43.580 [2024-12-09 05:21:32.621557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:43.580 [2024-12-09 05:21:32.657829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:43.580 [2024-12-09 05:21:33.638085] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 035469a9-8313-4ff0-b1bd-b30f0f6c3482 already exists 01:26:43.580 [2024-12-09 05:21:33.638123] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:035469a9-8313-4ff0-b1bd-b30f0f6c3482 alias for bdev NVMe1n1 01:26:43.580 [2024-12-09 05:21:33.638133] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 01:26:43.580 Running I/O for 1 seconds... 
01:26:43.580 23397.00 IOPS, 91.39 MiB/s 01:26:43.580 Latency(us) 01:26:43.580 [2024-12-09T05:21:35.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:43.580 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 01:26:43.580 NVMe0n1 : 1.01 23382.90 91.34 0.00 0.00 5465.23 2419.79 20870.70 01:26:43.580 [2024-12-09T05:21:35.197Z] =================================================================================================================== 01:26:43.580 [2024-12-09T05:21:35.197Z] Total : 23382.90 91.34 0.00 0.00 5465.23 2419.79 20870.70 01:26:43.580 Received shutdown signal, test time was about 1.000000 seconds 01:26:43.580 01:26:43.580 Latency(us) 01:26:43.580 [2024-12-09T05:21:35.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:43.580 [2024-12-09T05:21:35.197Z] =================================================================================================================== 01:26:43.580 [2024-12-09T05:21:35.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:26:43.580 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:26:43.580 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:43.844 rmmod nvme_tcp 01:26:43.844 rmmod nvme_fabrics 01:26:43.844 rmmod nvme_keyring 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 85311 ']' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 85311 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85311 ']' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85311 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85311 01:26:43.844 killing process with pid 85311 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:43.844 05:21:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85311' 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85311 01:26:43.844 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85311 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
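Teardown mirrors setup: only the 'SPDK_NVMF'-tagged firewall rules are dropped (the full rule set is saved, filtered, and re-applied), then the veths are detached from the bridge and deleted. In isolation, the same steps look like:

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link set nvmf_init_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if

remove_spdk_ns then disposes of the nvmf_tgt_ns_spdk namespace itself (its output is redirected away in the trace above).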
01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 01:26:44.101 ************************************ 01:26:44.101 END TEST nvmf_multicontroller 01:26:44.101 ************************************ 01:26:44.101 01:26:44.101 real 0m4.802s 01:26:44.101 user 0m14.846s 01:26:44.101 sys 0m0.825s 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:44.101 05:21:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:26:44.359 05:21:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:26:44.359 05:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.360 ************************************ 01:26:44.360 START TEST nvmf_aer 01:26:44.360 ************************************ 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:26:44.360 * Looking for test storage... 01:26:44.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:44.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:44.360 --rc genhtml_branch_coverage=1 01:26:44.360 --rc genhtml_function_coverage=1 01:26:44.360 --rc genhtml_legend=1 01:26:44.360 --rc geninfo_all_blocks=1 01:26:44.360 --rc geninfo_unexecuted_blocks=1 01:26:44.360 01:26:44.360 ' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:44.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:44.360 --rc genhtml_branch_coverage=1 01:26:44.360 --rc genhtml_function_coverage=1 01:26:44.360 --rc genhtml_legend=1 01:26:44.360 --rc geninfo_all_blocks=1 01:26:44.360 --rc geninfo_unexecuted_blocks=1 01:26:44.360 01:26:44.360 ' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:44.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:44.360 --rc genhtml_branch_coverage=1 01:26:44.360 --rc genhtml_function_coverage=1 01:26:44.360 --rc genhtml_legend=1 01:26:44.360 --rc geninfo_all_blocks=1 01:26:44.360 --rc geninfo_unexecuted_blocks=1 01:26:44.360 01:26:44.360 ' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:44.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:44.360 --rc genhtml_branch_coverage=1 01:26:44.360 --rc genhtml_function_coverage=1 01:26:44.360 --rc genhtml_legend=1 01:26:44.360 --rc geninfo_all_blocks=1 01:26:44.360 --rc geninfo_unexecuted_blocks=1 01:26:44.360 01:26:44.360 ' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:44.360 
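The scripts/common.sh trace above is a field-by-field version comparison: the installed lcov reports 1.15, which is less than 2, so the older --rc lcov_branch_coverage/--rc lcov_function_coverage flags are kept in LCOV_OPTS. A minimal stand-in for the same check (not the script's own cmp_versions helper, just an illustration using sort -V):

  version_lt() {   # succeeds when $1 sorts strictly before $2
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lcov_ver=$(lcov --version | awk '{print $NF}')
  if version_lt "$lcov_ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi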
05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:44.360 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:44.360 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
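The "line 33: [: : integer expression expected" message above is a benign artifact of build_nvmf_app_args testing an unset variable with an arithmetic operator ('[' '' -eq 1 ']'): test's -eq needs integers on both sides, so an empty string trips the error without affecting the run. The usual guard is to give the variable a numeric default before comparing; FLAG below is just a hypothetical stand-in for whichever option variable was empty:

  FLAG=""
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi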
01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:44.361 Cannot find device "nvmf_init_br" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:44.361 Cannot find device "nvmf_init_br2" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:44.361 Cannot find device "nvmf_tgt_br" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:44.361 Cannot find device "nvmf_tgt_br2" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:44.361 Cannot find device "nvmf_init_br" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:44.361 Cannot find device "nvmf_init_br2" 01:26:44.361 05:21:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:44.361 Cannot find device "nvmf_tgt_br" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:44.361 Cannot find device "nvmf_tgt_br2" 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 01:26:44.361 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:44.619 Cannot find device "nvmf_br" 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:44.619 Cannot find device "nvmf_init_if" 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:44.619 Cannot find device "nvmf_init_if2" 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 01:26:44.619 05:21:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:44.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:44.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:44.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:44.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:26:44.619 01:26:44.619 --- 10.0.0.3 ping statistics --- 01:26:44.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:44.619 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:44.619 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:44.619 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:26:44.619 01:26:44.619 --- 10.0.0.4 ping statistics --- 01:26:44.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:44.619 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:44.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:26:44.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:26:44.619 01:26:44.619 --- 10.0.0.1 ping statistics --- 01:26:44.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:44.619 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:44.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:26:44.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 01:26:44.619 01:26:44.619 --- 10.0.0.2 ping statistics --- 01:26:44.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:44.619 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 01:26:44.619 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=85675 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 85675 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 85675 ']' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:26:44.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:44.620 05:21:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:44.877 [2024-12-09 05:21:36.270855] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
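Note: the nvmf_veth_init sequence traced above amounts to the following standalone sketch. Device names, addresses, iptables rules and ping checks are copied from the trace; the SPDK_NVMF comment tags, "true" fallbacks and cleanup of stale devices are omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side veth ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends so initiator and target subnets see each other
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks in both directions, as in the trace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2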
01:26:44.877 [2024-12-09 05:21:36.270907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:44.877 [2024-12-09 05:21:36.411811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:26:44.877 [2024-12-09 05:21:36.447833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:44.877 [2024-12-09 05:21:36.447869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:44.877 [2024-12-09 05:21:36.447876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:44.877 [2024-12-09 05:21:36.447881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:44.877 [2024-12-09 05:21:36.447885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:44.877 [2024-12-09 05:21:36.448703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:44.877 [2024-12-09 05:21:36.449129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:44.877 [2024-12-09 05:21:36.449964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:44.877 [2024-12-09 05:21:36.450064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 [2024-12-09 05:21:37.193187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 Malloc0 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 [2024-12-09 05:21:37.250312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:45.811 [ 01:26:45.811 { 01:26:45.811 "allow_any_host": true, 01:26:45.811 "hosts": [], 01:26:45.811 "listen_addresses": [], 01:26:45.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:26:45.811 "subtype": "Discovery" 01:26:45.811 }, 01:26:45.811 { 01:26:45.811 "allow_any_host": true, 01:26:45.811 "hosts": [], 01:26:45.811 "listen_addresses": [ 01:26:45.811 { 01:26:45.811 "adrfam": "IPv4", 01:26:45.811 "traddr": "10.0.0.3", 01:26:45.811 "trsvcid": "4420", 01:26:45.811 "trtype": "TCP" 01:26:45.811 } 01:26:45.811 ], 01:26:45.811 "max_cntlid": 65519, 01:26:45.811 "max_namespaces": 2, 01:26:45.811 "min_cntlid": 1, 01:26:45.811 "model_number": "SPDK bdev Controller", 01:26:45.811 "namespaces": [ 01:26:45.811 { 01:26:45.811 "bdev_name": "Malloc0", 01:26:45.811 "name": "Malloc0", 01:26:45.811 "nguid": "95FE94DA44FD469280455DE6A92AE4E9", 01:26:45.811 "nsid": 1, 01:26:45.811 "uuid": "95fe94da-44fd-4692-8045-5de6a92ae4e9" 01:26:45.811 } 01:26:45.811 ], 01:26:45.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:45.811 "serial_number": "SPDK00000000000001", 01:26:45.811 "subtype": "NVMe" 01:26:45.811 } 01:26:45.811 ] 01:26:45.811 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85729 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 01:26:45.812 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 Malloc1 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 Asynchronous Event Request test 01:26:46.069 Attaching to 10.0.0.3 01:26:46.069 Attached to 10.0.0.3 01:26:46.069 Registering asynchronous event callbacks... 01:26:46.069 Starting namespace attribute notice tests for all controllers... 01:26:46.069 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 01:26:46.069 aer_cb - Changed Namespace 01:26:46.069 Cleaning up... 
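Note: rpc_cmd in the trace forwards its arguments to the running target's JSON-RPC server; the same nvmf_aer flow can be driven one call at a time with scripts/rpc.py, roughly as sketched below (paths and arguments taken from the trace, default RPC socket /var/tmp/spdk.sock assumed):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # start the AER listener from the initiator side, then add a second namespace;
    # the namespace-attribute-changed AEN is what the tool waits for before touching the file
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2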
01:26:46.069 [ 01:26:46.069 { 01:26:46.069 "allow_any_host": true, 01:26:46.069 "hosts": [], 01:26:46.069 "listen_addresses": [], 01:26:46.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:26:46.069 "subtype": "Discovery" 01:26:46.069 }, 01:26:46.069 { 01:26:46.069 "allow_any_host": true, 01:26:46.069 "hosts": [], 01:26:46.069 "listen_addresses": [ 01:26:46.069 { 01:26:46.069 "adrfam": "IPv4", 01:26:46.069 "traddr": "10.0.0.3", 01:26:46.069 "trsvcid": "4420", 01:26:46.069 "trtype": "TCP" 01:26:46.069 } 01:26:46.069 ], 01:26:46.069 "max_cntlid": 65519, 01:26:46.069 "max_namespaces": 2, 01:26:46.069 "min_cntlid": 1, 01:26:46.069 "model_number": "SPDK bdev Controller", 01:26:46.069 "namespaces": [ 01:26:46.069 { 01:26:46.069 "bdev_name": "Malloc0", 01:26:46.069 "name": "Malloc0", 01:26:46.069 "nguid": "95FE94DA44FD469280455DE6A92AE4E9", 01:26:46.069 "nsid": 1, 01:26:46.069 "uuid": "95fe94da-44fd-4692-8045-5de6a92ae4e9" 01:26:46.069 }, 01:26:46.069 { 01:26:46.069 "bdev_name": "Malloc1", 01:26:46.069 "name": "Malloc1", 01:26:46.069 "nguid": "25BB2687CEA74AE9B103241B71378DED", 01:26:46.069 "nsid": 2, 01:26:46.069 "uuid": "25bb2687-cea7-4ae9-b103-241b71378ded" 01:26:46.069 } 01:26:46.069 ], 01:26:46.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:46.069 "serial_number": "SPDK00000000000001", 01:26:46.069 "subtype": "NVMe" 01:26:46.069 } 01:26:46.069 ] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85729 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:46.069 rmmod nvme_tcp 
01:26:46.069 rmmod nvme_fabrics 01:26:46.069 rmmod nvme_keyring 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 85675 ']' 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 85675 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 85675 ']' 01:26:46.069 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 85675 01:26:46.070 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 01:26:46.070 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:46.070 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85675 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:46.327 killing process with pid 85675 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85675' 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 85675 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 85675 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:46.327 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:46.585 05:21:37 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:46.585 05:21:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 01:26:46.585 01:26:46.585 real 0m2.332s 01:26:46.585 user 0m5.814s 01:26:46.585 sys 0m0.563s 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:46.585 ************************************ 01:26:46.585 END TEST nvmf_aer 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:26:46.585 ************************************ 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.585 ************************************ 01:26:46.585 START TEST nvmf_async_init 01:26:46.585 ************************************ 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:26:46.585 * Looking for test storage... 
01:26:46.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 01:26:46.585 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:46.842 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:46.842 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:46.842 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:46.843 --rc genhtml_branch_coverage=1 01:26:46.843 --rc genhtml_function_coverage=1 01:26:46.843 --rc genhtml_legend=1 01:26:46.843 --rc geninfo_all_blocks=1 01:26:46.843 --rc geninfo_unexecuted_blocks=1 01:26:46.843 01:26:46.843 ' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:46.843 --rc genhtml_branch_coverage=1 01:26:46.843 --rc genhtml_function_coverage=1 01:26:46.843 --rc genhtml_legend=1 01:26:46.843 --rc geninfo_all_blocks=1 01:26:46.843 --rc geninfo_unexecuted_blocks=1 01:26:46.843 01:26:46.843 ' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:46.843 --rc genhtml_branch_coverage=1 01:26:46.843 --rc genhtml_function_coverage=1 01:26:46.843 --rc genhtml_legend=1 01:26:46.843 --rc geninfo_all_blocks=1 01:26:46.843 --rc geninfo_unexecuted_blocks=1 01:26:46.843 01:26:46.843 ' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:46.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:46.843 --rc genhtml_branch_coverage=1 01:26:46.843 --rc genhtml_function_coverage=1 01:26:46.843 --rc genhtml_legend=1 01:26:46.843 --rc geninfo_all_blocks=1 01:26:46.843 --rc geninfo_unexecuted_blocks=1 01:26:46.843 01:26:46.843 ' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:46.843 05:21:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:46.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 01:26:46.843 05:21:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fd53bb3aeb30411fa0ed3c6febb7765e 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:46.843 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:46.844 Cannot find device "nvmf_init_br" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:46.844 Cannot find device "nvmf_init_br2" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:46.844 Cannot find device "nvmf_tgt_br" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:46.844 Cannot find device "nvmf_tgt_br2" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:46.844 Cannot find device "nvmf_init_br" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:46.844 Cannot find device "nvmf_init_br2" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:46.844 Cannot find device "nvmf_tgt_br" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:46.844 Cannot find device "nvmf_tgt_br2" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:46.844 Cannot find device "nvmf_br" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:46.844 Cannot find device "nvmf_init_if" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:46.844 Cannot find device "nvmf_init_if2" 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:46.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 01:26:46.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:46.844 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:47.102 05:21:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:47.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:47.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 01:26:47.102 01:26:47.102 --- 10.0.0.3 ping statistics --- 01:26:47.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:47.102 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:47.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:47.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 01:26:47.102 01:26:47.102 --- 10.0.0.4 ping statistics --- 01:26:47.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:47.102 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:47.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:47.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:26:47.102 01:26:47.102 --- 10.0.0.1 ping statistics --- 01:26:47.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:47.102 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:47.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:47.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 01:26:47.102 01:26:47.102 --- 10.0.0.2 ping statistics --- 01:26:47.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:47.102 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=85958 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 85958 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 85958 ']' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:47.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:47.102 05:21:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:26:47.102 [2024-12-09 05:21:38.712250] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:47.102 [2024-12-09 05:21:38.712301] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:47.360 [2024-12-09 05:21:38.850302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:47.360 [2024-12-09 05:21:38.885177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
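The target application is started inside that namespace; a minimal sketch of the launch and the readiness wait, assuming the repo layout seen in the trace and polling scripts/rpc.py (roughly what waitforlisten does):
# start nvmf_tgt inside the test namespace, flags as traced above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until the app answers on its RPC socket (rough equivalent of waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done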
01:26:47.360 [2024-12-09 05:21:38.885209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:47.360 [2024-12-09 05:21:38.885215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:47.360 [2024-12-09 05:21:38.885220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:47.360 [2024-12-09 05:21:38.885225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:47.360 [2024-12-09 05:21:38.885461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 [2024-12-09 05:21:39.635008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 null0 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd53bb3aeb30411fa0ed3c6febb7765e 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.292 [2024-12-09 05:21:39.675085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.292 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.550 nvme0n1 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.550 [ 01:26:48.550 { 01:26:48.550 "aliases": [ 01:26:48.550 "fd53bb3a-eb30-411f-a0ed-3c6febb7765e" 01:26:48.550 ], 01:26:48.550 "assigned_rate_limits": { 01:26:48.550 "r_mbytes_per_sec": 0, 01:26:48.550 "rw_ios_per_sec": 0, 01:26:48.550 "rw_mbytes_per_sec": 0, 01:26:48.550 "w_mbytes_per_sec": 0 01:26:48.550 }, 01:26:48.550 "block_size": 512, 01:26:48.550 "claimed": false, 01:26:48.550 "driver_specific": { 01:26:48.550 "mp_policy": "active_passive", 01:26:48.550 "nvme": [ 01:26:48.550 { 01:26:48.550 "ctrlr_data": { 01:26:48.550 "ana_reporting": false, 01:26:48.550 "cntlid": 1, 01:26:48.550 "firmware_revision": "25.01", 01:26:48.550 "model_number": "SPDK bdev Controller", 01:26:48.550 "multi_ctrlr": true, 01:26:48.550 "oacs": { 01:26:48.550 "firmware": 0, 01:26:48.550 "format": 0, 01:26:48.550 "ns_manage": 0, 01:26:48.550 "security": 0 01:26:48.550 }, 01:26:48.550 "serial_number": "00000000000000000000", 01:26:48.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.550 "vendor_id": "0x8086" 01:26:48.550 }, 01:26:48.550 "ns_data": { 01:26:48.550 "can_share": true, 01:26:48.550 "id": 1 01:26:48.550 }, 01:26:48.550 "trid": { 01:26:48.550 "adrfam": "IPv4", 01:26:48.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.550 "traddr": "10.0.0.3", 01:26:48.550 "trsvcid": "4420", 01:26:48.550 "trtype": "TCP" 01:26:48.550 }, 01:26:48.550 "vs": { 01:26:48.550 "nvme_version": "1.3" 01:26:48.550 } 01:26:48.550 } 01:26:48.550 ] 01:26:48.550 }, 01:26:48.550 "memory_domains": [ 01:26:48.550 { 01:26:48.550 "dma_device_id": "system", 01:26:48.550 "dma_device_type": 1 01:26:48.550 } 01:26:48.550 ], 01:26:48.550 "name": "nvme0n1", 01:26:48.550 "num_blocks": 2097152, 01:26:48.550 "numa_id": -1, 01:26:48.550 "product_name": "NVMe disk", 01:26:48.550 "supported_io_types": { 01:26:48.550 "abort": true, 
01:26:48.550 "compare": true, 01:26:48.550 "compare_and_write": true, 01:26:48.550 "copy": true, 01:26:48.550 "flush": true, 01:26:48.550 "get_zone_info": false, 01:26:48.550 "nvme_admin": true, 01:26:48.550 "nvme_io": true, 01:26:48.550 "nvme_io_md": false, 01:26:48.550 "nvme_iov_md": false, 01:26:48.550 "read": true, 01:26:48.550 "reset": true, 01:26:48.550 "seek_data": false, 01:26:48.550 "seek_hole": false, 01:26:48.550 "unmap": false, 01:26:48.550 "write": true, 01:26:48.550 "write_zeroes": true, 01:26:48.550 "zcopy": false, 01:26:48.550 "zone_append": false, 01:26:48.550 "zone_management": false 01:26:48.550 }, 01:26:48.550 "uuid": "fd53bb3a-eb30-411f-a0ed-3c6febb7765e", 01:26:48.550 "zoned": false 01:26:48.550 } 01:26:48.550 ] 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.550 05:21:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.550 [2024-12-09 05:21:39.935586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:26:48.550 [2024-12-09 05:21:39.935680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1781500 (9): Bad file descriptor 01:26:48.550 [2024-12-09 05:21:40.077669] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 01:26:48.550 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.550 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:26:48.550 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.550 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.550 [ 01:26:48.550 { 01:26:48.550 "aliases": [ 01:26:48.550 "fd53bb3a-eb30-411f-a0ed-3c6febb7765e" 01:26:48.550 ], 01:26:48.550 "assigned_rate_limits": { 01:26:48.550 "r_mbytes_per_sec": 0, 01:26:48.550 "rw_ios_per_sec": 0, 01:26:48.550 "rw_mbytes_per_sec": 0, 01:26:48.550 "w_mbytes_per_sec": 0 01:26:48.550 }, 01:26:48.550 "block_size": 512, 01:26:48.550 "claimed": false, 01:26:48.550 "driver_specific": { 01:26:48.550 "mp_policy": "active_passive", 01:26:48.550 "nvme": [ 01:26:48.550 { 01:26:48.550 "ctrlr_data": { 01:26:48.550 "ana_reporting": false, 01:26:48.550 "cntlid": 2, 01:26:48.550 "firmware_revision": "25.01", 01:26:48.550 "model_number": "SPDK bdev Controller", 01:26:48.550 "multi_ctrlr": true, 01:26:48.550 "oacs": { 01:26:48.550 "firmware": 0, 01:26:48.550 "format": 0, 01:26:48.550 "ns_manage": 0, 01:26:48.550 "security": 0 01:26:48.550 }, 01:26:48.550 "serial_number": "00000000000000000000", 01:26:48.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.550 "vendor_id": "0x8086" 01:26:48.550 }, 01:26:48.550 "ns_data": { 01:26:48.550 "can_share": true, 01:26:48.550 "id": 1 01:26:48.550 }, 01:26:48.550 "trid": { 01:26:48.550 "adrfam": "IPv4", 01:26:48.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.550 "traddr": "10.0.0.3", 01:26:48.550 "trsvcid": "4420", 01:26:48.550 "trtype": "TCP" 01:26:48.550 }, 01:26:48.550 "vs": { 01:26:48.550 "nvme_version": "1.3" 01:26:48.551 } 01:26:48.551 } 01:26:48.551 ] 
01:26:48.551 }, 01:26:48.551 "memory_domains": [ 01:26:48.551 { 01:26:48.551 "dma_device_id": "system", 01:26:48.551 "dma_device_type": 1 01:26:48.551 } 01:26:48.551 ], 01:26:48.551 "name": "nvme0n1", 01:26:48.551 "num_blocks": 2097152, 01:26:48.551 "numa_id": -1, 01:26:48.551 "product_name": "NVMe disk", 01:26:48.551 "supported_io_types": { 01:26:48.551 "abort": true, 01:26:48.551 "compare": true, 01:26:48.551 "compare_and_write": true, 01:26:48.551 "copy": true, 01:26:48.551 "flush": true, 01:26:48.551 "get_zone_info": false, 01:26:48.551 "nvme_admin": true, 01:26:48.551 "nvme_io": true, 01:26:48.551 "nvme_io_md": false, 01:26:48.551 "nvme_iov_md": false, 01:26:48.551 "read": true, 01:26:48.551 "reset": true, 01:26:48.551 "seek_data": false, 01:26:48.551 "seek_hole": false, 01:26:48.551 "unmap": false, 01:26:48.551 "write": true, 01:26:48.551 "write_zeroes": true, 01:26:48.551 "zcopy": false, 01:26:48.551 "zone_append": false, 01:26:48.551 "zone_management": false 01:26:48.551 }, 01:26:48.551 "uuid": "fd53bb3a-eb30-411f-a0ed-3c6febb7765e", 01:26:48.551 "zoned": false 01:26:48.551 } 01:26:48.551 ] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mIJRuPW2Vi 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mIJRuPW2Vi 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.mIJRuPW2Vi 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.551 [2024-12-09 05:21:40.151718] 
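rpc_cmd in the trace above wraps scripts/rpc.py against the target's RPC socket; a rough sketch of the same async_init bring-up as plain rpc.py calls, with arguments copied from the trace (the rpc.py path and the /var/tmp/spdk.sock socket are assumptions about the wrapper):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512                     # 1024 MiB null bdev, 512-byte blocks (matches num_blocks 2097152)
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd53bb3aeb30411fa0ed3c6febb7765e
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1                           # produces the JSON dumps seen in the trace
$rpc bdev_nvme_reset_controller nvme0                    # exercised between the two dumps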
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:26:48.551 [2024-12-09 05:21:40.151830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.551 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.809 [2024-12-09 05:21:40.167735] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:26:48.809 nvme0n1 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.809 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.809 [ 01:26:48.809 { 01:26:48.809 "aliases": [ 01:26:48.809 "fd53bb3a-eb30-411f-a0ed-3c6febb7765e" 01:26:48.809 ], 01:26:48.809 "assigned_rate_limits": { 01:26:48.809 "r_mbytes_per_sec": 0, 01:26:48.809 "rw_ios_per_sec": 0, 01:26:48.809 "rw_mbytes_per_sec": 0, 01:26:48.809 "w_mbytes_per_sec": 0 01:26:48.809 }, 01:26:48.809 "block_size": 512, 01:26:48.809 "claimed": false, 01:26:48.809 "driver_specific": { 01:26:48.809 "mp_policy": "active_passive", 01:26:48.809 "nvme": [ 01:26:48.809 { 01:26:48.809 "ctrlr_data": { 01:26:48.809 "ana_reporting": false, 01:26:48.809 "cntlid": 3, 01:26:48.809 "firmware_revision": "25.01", 01:26:48.809 "model_number": "SPDK bdev Controller", 01:26:48.809 "multi_ctrlr": true, 01:26:48.809 "oacs": { 01:26:48.809 "firmware": 0, 01:26:48.809 "format": 0, 01:26:48.809 "ns_manage": 0, 01:26:48.809 "security": 0 01:26:48.809 }, 01:26:48.809 "serial_number": "00000000000000000000", 01:26:48.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.809 "vendor_id": "0x8086" 01:26:48.809 }, 01:26:48.809 "ns_data": { 01:26:48.809 "can_share": true, 01:26:48.809 "id": 1 01:26:48.809 }, 01:26:48.809 "trid": { 01:26:48.809 "adrfam": "IPv4", 01:26:48.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:26:48.809 "traddr": "10.0.0.3", 01:26:48.809 "trsvcid": "4421", 01:26:48.809 "trtype": "TCP" 01:26:48.809 }, 01:26:48.809 "vs": { 01:26:48.809 "nvme_version": "1.3" 01:26:48.809 } 01:26:48.809 } 01:26:48.809 ] 01:26:48.809 }, 01:26:48.809 "memory_domains": [ 01:26:48.809 { 01:26:48.809 "dma_device_id": "system", 01:26:48.809 "dma_device_type": 1 01:26:48.809 } 01:26:48.809 ], 01:26:48.809 "name": "nvme0n1", 01:26:48.809 "num_blocks": 
2097152, 01:26:48.809 "numa_id": -1, 01:26:48.809 "product_name": "NVMe disk", 01:26:48.809 "supported_io_types": { 01:26:48.809 "abort": true, 01:26:48.809 "compare": true, 01:26:48.809 "compare_and_write": true, 01:26:48.809 "copy": true, 01:26:48.809 "flush": true, 01:26:48.809 "get_zone_info": false, 01:26:48.809 "nvme_admin": true, 01:26:48.809 "nvme_io": true, 01:26:48.809 "nvme_io_md": false, 01:26:48.809 "nvme_iov_md": false, 01:26:48.809 "read": true, 01:26:48.809 "reset": true, 01:26:48.809 "seek_data": false, 01:26:48.809 "seek_hole": false, 01:26:48.809 "unmap": false, 01:26:48.809 "write": true, 01:26:48.809 "write_zeroes": true, 01:26:48.809 "zcopy": false, 01:26:48.809 "zone_append": false, 01:26:48.809 "zone_management": false 01:26:48.809 }, 01:26:48.809 "uuid": "fd53bb3a-eb30-411f-a0ed-3c6febb7765e", 01:26:48.809 "zoned": false 01:26:48.809 } 01:26:48.809 ] 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.mIJRuPW2Vi 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:48.810 rmmod nvme_tcp 01:26:48.810 rmmod nvme_fabrics 01:26:48.810 rmmod nvme_keyring 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 85958 ']' 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 85958 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 85958 ']' 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 85958 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85958 01:26:48.810 05:21:40 
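The second half of the test repeats the attach over a TLS-secured listener; a rough sketch of those traced steps as plain rpc.py calls (same $rpc shorthand as above; the PSK value and the redirect into the temp file are taken from context, with the key elided here):
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:...' > "$key_path"              # test PSK from the trace (elided)
chmod 0600 "$key_path"
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable   # require explicit host entries
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
$rpc bdev_nvme_detach_controller nvme0
rm -f "$key_path"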
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:48.810 killing process with pid 85958 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85958' 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 85958 01:26:48.810 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 85958 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:49.067 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:49.068 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:49.068 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:49.068 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
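Teardown is the mirror image; a minimal sketch of what the traced cleanup above amounts to (the final namespace delete is an assumption about what _remove_spdk_ns does):
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the comment-tagged test rules
ip link set nvmf_init_br nomaster
ip link set nvmf_init_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                              # deleting one veth end removes its peer too
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of _remove_spdk_ns here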
_remove_spdk_ns 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 01:26:49.325 01:26:49.325 real 0m2.648s 01:26:49.325 user 0m2.240s 01:26:49.325 sys 0m0.567s 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:49.325 ************************************ 01:26:49.325 END TEST nvmf_async_init 01:26:49.325 ************************************ 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.325 ************************************ 01:26:49.325 START TEST dma 01:26:49.325 ************************************ 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:26:49.325 * Looking for test storage... 01:26:49.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 01:26:49.325 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.585 --rc genhtml_branch_coverage=1 01:26:49.585 --rc genhtml_function_coverage=1 01:26:49.585 --rc genhtml_legend=1 01:26:49.585 --rc geninfo_all_blocks=1 01:26:49.585 --rc geninfo_unexecuted_blocks=1 01:26:49.585 01:26:49.585 ' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.585 --rc genhtml_branch_coverage=1 01:26:49.585 --rc genhtml_function_coverage=1 01:26:49.585 --rc genhtml_legend=1 01:26:49.585 --rc geninfo_all_blocks=1 01:26:49.585 --rc geninfo_unexecuted_blocks=1 01:26:49.585 01:26:49.585 ' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.585 --rc genhtml_branch_coverage=1 01:26:49.585 --rc genhtml_function_coverage=1 01:26:49.585 --rc genhtml_legend=1 01:26:49.585 --rc geninfo_all_blocks=1 01:26:49.585 --rc geninfo_unexecuted_blocks=1 01:26:49.585 01:26:49.585 ' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.585 --rc genhtml_branch_coverage=1 01:26:49.585 --rc genhtml_function_coverage=1 01:26:49.585 --rc genhtml_legend=1 01:26:49.585 --rc geninfo_all_blocks=1 01:26:49.585 --rc geninfo_unexecuted_blocks=1 01:26:49.585 01:26:49.585 ' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:49.585 05:21:40 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:49.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:49.585 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:49.586 05:21:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:49.586 05:21:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 01:26:49.586 05:21:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 01:26:49.586 01:26:49.586 real 0m0.150s 01:26:49.586 user 0m0.095s 01:26:49.586 sys 0m0.058s 01:26:49.586 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:49.586 ************************************ 01:26:49.586 END TEST dma 01:26:49.586 ************************************ 01:26:49.586 05:21:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.586 ************************************ 01:26:49.586 START TEST nvmf_identify 01:26:49.586 ************************************ 01:26:49.586 05:21:41 
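A side note on the "[: : integer expression expected" message that common.sh prints above: it comes from a numeric test against an empty variable, and the non-zero status simply skips that branch, so the test continues. A minimal standalone reproduction (hypothetical, not from the trace):
unset VAR
[ "$VAR" -eq 1 ]          # prints "[: : integer expression expected"
echo $?                   # -> 2, so an 'if' guarded by this test just falls through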
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:26:49.586 * Looking for test storage... 01:26:49.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:49.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.586 --rc genhtml_branch_coverage=1 01:26:49.586 --rc genhtml_function_coverage=1 01:26:49.586 --rc genhtml_legend=1 01:26:49.586 --rc geninfo_all_blocks=1 01:26:49.586 --rc geninfo_unexecuted_blocks=1 01:26:49.586 01:26:49.586 ' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:49.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.586 --rc genhtml_branch_coverage=1 01:26:49.586 --rc genhtml_function_coverage=1 01:26:49.586 --rc genhtml_legend=1 01:26:49.586 --rc geninfo_all_blocks=1 01:26:49.586 --rc geninfo_unexecuted_blocks=1 01:26:49.586 01:26:49.586 ' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:49.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.586 --rc genhtml_branch_coverage=1 01:26:49.586 --rc genhtml_function_coverage=1 01:26:49.586 --rc genhtml_legend=1 01:26:49.586 --rc geninfo_all_blocks=1 01:26:49.586 --rc geninfo_unexecuted_blocks=1 01:26:49.586 01:26:49.586 ' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:49.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:49.586 --rc genhtml_branch_coverage=1 01:26:49.586 --rc genhtml_function_coverage=1 01:26:49.586 --rc genhtml_legend=1 01:26:49.586 --rc geninfo_all_blocks=1 01:26:49.586 --rc geninfo_unexecuted_blocks=1 01:26:49.586 01:26:49.586 ' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.586 
05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:49.586 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:49.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:49.587 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:49.845 05:21:41 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:49.845 Cannot find device "nvmf_init_br" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:49.845 Cannot find device "nvmf_init_br2" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:49.845 Cannot find device "nvmf_tgt_br" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
01:26:49.845 Cannot find device "nvmf_tgt_br2" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:49.845 Cannot find device "nvmf_init_br" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:49.845 Cannot find device "nvmf_init_br2" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:49.845 Cannot find device "nvmf_tgt_br" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:49.845 Cannot find device "nvmf_tgt_br2" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:49.845 Cannot find device "nvmf_br" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:49.845 Cannot find device "nvmf_init_if" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:49.845 Cannot find device "nvmf_init_if2" 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:49.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:49.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:49.845 
05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:49.845 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:49.846 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:50.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:26:50.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 01:26:50.103 01:26:50.103 --- 10.0.0.3 ping statistics --- 01:26:50.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:50.103 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:50.103 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:50.103 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 01:26:50.103 01:26:50.103 --- 10.0.0.4 ping statistics --- 01:26:50.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:50.103 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:50.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:50.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:26:50.103 01:26:50.103 --- 10.0.0.1 ping statistics --- 01:26:50.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:50.103 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:50.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:26:50.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 01:26:50.103 01:26:50.103 --- 10.0.0.2 ping statistics --- 01:26:50.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:50.103 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86280 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86280 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86280 ']' 01:26:50.103 
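The block above is nvmf_veth_init from nvmf/common.sh: any leftover interfaces are torn down first (the "Cannot find device" lines are the expected first-run errors), then a veth pair is created per interface, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, initiator addresses 10.0.0.1-2 and target addresses 10.0.0.3-4 are assigned, all host-side peers are enslaved to the nvmf_br bridge, port 4420 is opened with iptables, and connectivity is checked with pings in both directions. A condensed sketch of that topology follows, with only one of the two interface pairs shown and the shape of the ipts wrapper inferred from its expansion in the trace (both are assumptions, not copied from the script):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the default netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge ties the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; } # assumed wrapper; matches the expansion above
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                            # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

The SPDK_NVMF comment tag is what lets the harness later remove exactly the rules it added and nothing else.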
05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:50.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:50.103 05:21:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.103 [2024-12-09 05:21:41.543794] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:50.103 [2024-12-09 05:21:41.543850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:50.103 [2024-12-09 05:21:41.682887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:26:50.361 [2024-12-09 05:21:41.720435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:50.361 [2024-12-09 05:21:41.720472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:50.361 [2024-12-09 05:21:41.720478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:50.361 [2024-12-09 05:21:41.720483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:50.361 [2024-12-09 05:21:41.720488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
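nvmf_tgt is launched inside the target namespace with -i 0 (shm instance id), -e 0xFFFF (tracepoint group mask, which the "Tracepoint Group Mask 0xFFFF specified" notice echoes back), and -m 0xF (core mask for cores 0-3, matching "Total cores available: 4"), and waitforlisten then blocks until the RPC socket answers. A rough stand-in for that wait, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (the helper's real implementation may differ):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    # bail out if the target died before it ever started listening
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done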
01:26:50.361 [2024-12-09 05:21:41.721311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:50.361 [2024-12-09 05:21:41.721626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:50.361 [2024-12-09 05:21:41.721944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:50.361 [2024-12-09 05:21:41.722103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 [2024-12-09 05:21:42.431031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 Malloc0 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 [2024-12-09 05:21:42.526466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.927 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:51.186 [ 01:26:51.186 { 01:26:51.186 "allow_any_host": true, 01:26:51.186 "hosts": [], 01:26:51.186 "listen_addresses": [ 01:26:51.186 { 01:26:51.186 "adrfam": "IPv4", 01:26:51.186 "traddr": "10.0.0.3", 01:26:51.186 "trsvcid": "4420", 01:26:51.186 "trtype": "TCP" 01:26:51.186 } 01:26:51.186 ], 01:26:51.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:26:51.186 "subtype": "Discovery" 01:26:51.186 }, 01:26:51.186 { 01:26:51.186 "allow_any_host": true, 01:26:51.186 "hosts": [], 01:26:51.186 "listen_addresses": [ 01:26:51.186 { 01:26:51.186 "adrfam": "IPv4", 01:26:51.186 "traddr": "10.0.0.3", 01:26:51.186 "trsvcid": "4420", 01:26:51.186 "trtype": "TCP" 01:26:51.186 } 01:26:51.186 ], 01:26:51.186 "max_cntlid": 65519, 01:26:51.186 "max_namespaces": 32, 01:26:51.186 "min_cntlid": 1, 01:26:51.186 "model_number": "SPDK bdev Controller", 01:26:51.186 "namespaces": [ 01:26:51.186 { 01:26:51.186 "bdev_name": "Malloc0", 01:26:51.186 "eui64": "ABCDEF0123456789", 01:26:51.186 "name": "Malloc0", 01:26:51.186 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:26:51.186 "nsid": 1, 01:26:51.186 "uuid": "5bb11e01-27f0-4610-8ebf-14c64bf72e61" 01:26:51.186 } 01:26:51.186 ], 01:26:51.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:26:51.186 "serial_number": "SPDK00000000000001", 01:26:51.186 "subtype": "NVMe" 01:26:51.186 } 01:26:51.186 ] 01:26:51.186 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.186 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:26:51.186 [2024-12-09 05:21:42.569201] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
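With the target listening, identify.sh provisions it over JSON-RPC: a TCP transport (the '-t tcp -o' accumulated in NVMF_TRANSPORT_OPTS plus '-u 8192'), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as namespace 1 under fixed NGUID/EUI64 values, and 10.0.0.3:4420 listeners for both that subsystem and discovery; nvmf_get_subsystems then returns the two-entry JSON shown above. rpc_cmd in the trace is the harness's wrapper around rpc.py, so outside the harness the same sequence is roughly (rpc.py path assumed from the repo layout above; the RPC endpoint is a path-based Unix socket, so no netns prefix is needed, and rpc_cmd is indeed invoked without one):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_get_subsystems

spdk_nvme_identify is then pointed at the discovery subsystem via the -r transport-ID string with -L all debug logging, which is what produces the connection trace and controller dump that follow; if nvme-cli is available on the initiator side, 'nvme discover -t tcp -a 10.0.0.3 -s 4420' should report the same two discovery entries.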
01:26:51.186 [2024-12-09 05:21:42.569237] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86333 ] 01:26:51.186 [2024-12-09 05:21:42.721250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 01:26:51.186 [2024-12-09 05:21:42.721309] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:26:51.186 [2024-12-09 05:21:42.721313] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:26:51.186 [2024-12-09 05:21:42.721323] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:26:51.186 [2024-12-09 05:21:42.721331] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:26:51.186 [2024-12-09 05:21:42.721519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 01:26:51.186 [2024-12-09 05:21:42.721549] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd52d90 0 01:26:51.186 [2024-12-09 05:21:42.728581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:26:51.186 [2024-12-09 05:21:42.728597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:26:51.186 [2024-12-09 05:21:42.728601] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:26:51.186 [2024-12-09 05:21:42.728603] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:26:51.186 [2024-12-09 05:21:42.728630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.728634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.728638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.186 [2024-12-09 05:21:42.728649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:26:51.186 [2024-12-09 05:21:42.728671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.186 [2024-12-09 05:21:42.736574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.186 [2024-12-09 05:21:42.736588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.186 [2024-12-09 05:21:42.736590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.186 [2024-12-09 05:21:42.736604] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:26:51.186 [2024-12-09 05:21:42.736610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 01:26:51.186 [2024-12-09 05:21:42.736614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 01:26:51.186 [2024-12-09 05:21:42.736626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:26:51.186 [2024-12-09 05:21:42.736631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.186 [2024-12-09 05:21:42.736638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.186 [2024-12-09 05:21:42.736655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.186 [2024-12-09 05:21:42.736707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.186 [2024-12-09 05:21:42.736712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.186 [2024-12-09 05:21:42.736714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.186 [2024-12-09 05:21:42.736721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 01:26:51.186 [2024-12-09 05:21:42.736726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 01:26:51.186 [2024-12-09 05:21:42.736731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.186 [2024-12-09 05:21:42.736742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.186 [2024-12-09 05:21:42.736753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.186 [2024-12-09 05:21:42.736795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.186 [2024-12-09 05:21:42.736800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.186 [2024-12-09 05:21:42.736802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.186 [2024-12-09 05:21:42.736809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 01:26:51.186 [2024-12-09 05:21:42.736814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 01:26:51.186 [2024-12-09 05:21:42.736819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.186 [2024-12-09 05:21:42.736824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.186 [2024-12-09 05:21:42.736830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.186 [2024-12-09 05:21:42.736840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.186 [2024-12-09 05:21:42.736887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.186 [2024-12-09 05:21:42.736892] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.186 [2024-12-09 05:21:42.736894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.736896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.736900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:26:51.187 [2024-12-09 05:21:42.736907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.736909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.736912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.736917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.736927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.736970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.736975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.736977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.736979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.736983] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 01:26:51.187 [2024-12-09 05:21:42.736987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 01:26:51.187 [2024-12-09 05:21:42.736992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:26:51.187 [2024-12-09 05:21:42.737095] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 01:26:51.187 [2024-12-09 05:21:42.737098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:26:51.187 [2024-12-09 05:21:42.737105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.737125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.737177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
01:26:51.187 [2024-12-09 05:21:42.737186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:26:51.187 [2024-12-09 05:21:42.737196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.737217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.737257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737270] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:26:51.187 [2024-12-09 05:21:42.737273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737279] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 01:26:51.187 [2024-12-09 05:21:42.737284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.737311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.737384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.187 [2024-12-09 05:21:42.737388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.187 [2024-12-09 05:21:42.737391] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737394] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd52d90): datao=0, datal=4096, cccid=0 01:26:51.187 [2024-12-09 05:21:42.737397] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd93600) on tqpair(0xd52d90): expected_datao=0, payload_size=4096 01:26:51.187 [2024-12-09 05:21:42.737400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
01:26:51.187 [2024-12-09 05:21:42.737407] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737410] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737431] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 01:26:51.187 [2024-12-09 05:21:42.737435] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 01:26:51.187 [2024-12-09 05:21:42.737438] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 01:26:51.187 [2024-12-09 05:21:42.737444] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 01:26:51.187 [2024-12-09 05:21:42.737447] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 01:26:51.187 [2024-12-09 05:21:42.737450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:26:51.187 [2024-12-09 05:21:42.737483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.737528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.187 [2024-12-09 05:21:42.737557] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.187 [2024-12-09 05:21:42.737583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.187 [2024-12-09 05:21:42.737597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.187 [2024-12-09 05:21:42.737609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:26:51.187 [2024-12-09 05:21:42.737619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.737643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93600, cid 0, qid 0 01:26:51.187 [2024-12-09 05:21:42.737647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93780, cid 1, qid 0 01:26:51.187 [2024-12-09 05:21:42.737650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93900, cid 2, qid 0 01:26:51.187 [2024-12-09 05:21:42.737654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.187 [2024-12-09 05:21:42.737657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93c00, cid 4, qid 0 01:26:51.187 [2024-12-09 05:21:42.737745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93c00) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737758] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 01:26:51.187 [2024-12-09 05:21:42.737762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 01:26:51.187 [2024-12-09 05:21:42.737770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd52d90) 01:26:51.187 [2024-12-09 05:21:42.737777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.187 [2024-12-09 05:21:42.737788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93c00, cid 4, qid 0 01:26:51.187 [2024-12-09 05:21:42.737836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.187 [2024-12-09 05:21:42.737854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.187 [2024-12-09 05:21:42.737857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737859] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd52d90): datao=0, datal=4096, cccid=4 01:26:51.187 [2024-12-09 05:21:42.737862] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd93c00) on tqpair(0xd52d90): expected_datao=0, payload_size=4096 01:26:51.187 [2024-12-09 05:21:42.737865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.187 [2024-12-09 05:21:42.737884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.187 [2024-12-09 05:21:42.737886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.187 [2024-12-09 05:21:42.737889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93c00) on tqpair=0xd52d90 01:26:51.187 [2024-12-09 05:21:42.737898] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 01:26:51.187 [2024-12-09 05:21:42.737914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.737917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd52d90) 01:26:51.188 [2024-12-09 05:21:42.737922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.188 [2024-12-09 05:21:42.737928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.737930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.737933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd52d90) 01:26:51.188 [2024-12-09 05:21:42.737937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.188 [2024-12-09 05:21:42.737952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xd93c00, cid 4, qid 0 01:26:51.188 [2024-12-09 05:21:42.737956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93d80, cid 5, qid 0 01:26:51.188 [2024-12-09 05:21:42.738040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.188 [2024-12-09 05:21:42.738045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.188 [2024-12-09 05:21:42.738048] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.738050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd52d90): datao=0, datal=1024, cccid=4 01:26:51.188 [2024-12-09 05:21:42.738054] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd93c00) on tqpair(0xd52d90): expected_datao=0, payload_size=1024 01:26:51.188 [2024-12-09 05:21:42.738057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.738062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.738064] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.738069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.188 [2024-12-09 05:21:42.738073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.188 [2024-12-09 05:21:42.738075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.738078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93d80) on tqpair=0xd52d90 01:26:51.188 [2024-12-09 05:21:42.779591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.188 [2024-12-09 05:21:42.779606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.188 [2024-12-09 05:21:42.779608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93c00) on tqpair=0xd52d90 01:26:51.188 [2024-12-09 05:21:42.779621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd52d90) 01:26:51.188 [2024-12-09 05:21:42.779629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.188 [2024-12-09 05:21:42.779648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93c00, cid 4, qid 0 01:26:51.188 [2024-12-09 05:21:42.779698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.188 [2024-12-09 05:21:42.779703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.188 [2024-12-09 05:21:42.779705] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779708] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd52d90): datao=0, datal=3072, cccid=4 01:26:51.188 [2024-12-09 05:21:42.779711] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd93c00) on tqpair(0xd52d90): expected_datao=0, payload_size=3072 01:26:51.188 [2024-12-09 05:21:42.779714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779719] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779722] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.188 [2024-12-09 05:21:42.779733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.188 [2024-12-09 05:21:42.779735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93c00) on tqpair=0xd52d90 01:26:51.188 [2024-12-09 05:21:42.779744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd52d90) 01:26:51.188 [2024-12-09 05:21:42.779751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.188 [2024-12-09 05:21:42.779765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93c00, cid 4, qid 0 01:26:51.188 [2024-12-09 05:21:42.779816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.188 [2024-12-09 05:21:42.779821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.188 [2024-12-09 05:21:42.779823] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779826] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd52d90): datao=0, datal=8, cccid=4 01:26:51.188 [2024-12-09 05:21:42.779829] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd93c00) on tqpair(0xd52d90): expected_datao=0, payload_size=8 01:26:51.188 [2024-12-09 05:21:42.779832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779837] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.188 [2024-12-09 05:21:42.779839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.449 ===================================================== 01:26:51.449 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 01:26:51.449 ===================================================== 01:26:51.449 Controller Capabilities/Features 01:26:51.449 ================================ 01:26:51.449 Vendor ID: 0000 01:26:51.449 Subsystem Vendor ID: 0000 01:26:51.449 Serial Number: .................... 01:26:51.449 Model Number: ........................................ 
01:26:51.449 Firmware Version: 25.01 01:26:51.449 Recommended Arb Burst: 0 01:26:51.449 IEEE OUI Identifier: 00 00 00 01:26:51.449 Multi-path I/O 01:26:51.449 May have multiple subsystem ports: No 01:26:51.449 May have multiple controllers: No 01:26:51.449 Associated with SR-IOV VF: No 01:26:51.449 Max Data Transfer Size: 131072 01:26:51.449 Max Number of Namespaces: 0 01:26:51.449 Max Number of I/O Queues: 1024 01:26:51.449 NVMe Specification Version (VS): 1.3 01:26:51.449 NVMe Specification Version (Identify): 1.3 01:26:51.449 Maximum Queue Entries: 128 01:26:51.449 Contiguous Queues Required: Yes 01:26:51.449 Arbitration Mechanisms Supported 01:26:51.449 Weighted Round Robin: Not Supported 01:26:51.449 Vendor Specific: Not Supported 01:26:51.449 Reset Timeout: 15000 ms 01:26:51.449 Doorbell Stride: 4 bytes 01:26:51.449 NVM Subsystem Reset: Not Supported 01:26:51.449 Command Sets Supported 01:26:51.449 NVM Command Set: Supported 01:26:51.449 Boot Partition: Not Supported 01:26:51.449 Memory Page Size Minimum: 4096 bytes 01:26:51.449 Memory Page Size Maximum: 4096 bytes 01:26:51.449 Persistent Memory Region: Not Supported 01:26:51.449 Optional Asynchronous Events Supported 01:26:51.449 Namespace Attribute Notices: Not Supported 01:26:51.449 Firmware Activation Notices: Not Supported 01:26:51.449 ANA Change Notices: Not Supported 01:26:51.449 PLE Aggregate Log Change Notices: Not Supported 01:26:51.449 LBA Status Info Alert Notices: Not Supported 01:26:51.449 EGE Aggregate Log Change Notices: Not Supported 01:26:51.449 Normal NVM Subsystem Shutdown event: Not Supported 01:26:51.449 Zone Descriptor Change Notices: Not Supported 01:26:51.449 Discovery Log Change Notices: Supported 01:26:51.449 Controller Attributes 01:26:51.449 128-bit Host Identifier: Not Supported 01:26:51.449 Non-Operational Permissive Mode: Not Supported 01:26:51.449 NVM Sets: Not Supported 01:26:51.449 Read Recovery Levels: Not Supported 01:26:51.449 Endurance Groups: Not Supported 01:26:51.449 Predictable Latency Mode: Not Supported 01:26:51.449 Traffic Based Keep ALive: Not Supported 01:26:51.449 Namespace Granularity: Not Supported 01:26:51.449 SQ Associations: Not Supported 01:26:51.449 UUID List: Not Supported 01:26:51.449 Multi-Domain Subsystem: Not Supported 01:26:51.449 Fixed Capacity Management: Not Supported 01:26:51.449 Variable Capacity Management: Not Supported 01:26:51.449 Delete Endurance Group: Not Supported 01:26:51.449 Delete NVM Set: Not Supported 01:26:51.449 Extended LBA Formats Supported: Not Supported 01:26:51.449 Flexible Data Placement Supported: Not Supported 01:26:51.449 01:26:51.449 Controller Memory Buffer Support 01:26:51.449 ================================ 01:26:51.449 Supported: No 01:26:51.449 01:26:51.449 Persistent Memory Region Support 01:26:51.449 ================================ 01:26:51.450 Supported: No 01:26:51.450 01:26:51.450 Admin Command Set Attributes 01:26:51.450 ============================ 01:26:51.450 Security Send/Receive: Not Supported 01:26:51.450 Format NVM: Not Supported 01:26:51.450 Firmware Activate/Download: Not Supported 01:26:51.450 Namespace Management: Not Supported 01:26:51.450 Device Self-Test: Not Supported 01:26:51.450 Directives: Not Supported 01:26:51.450 NVMe-MI: Not Supported 01:26:51.450 Virtualization Management: Not Supported 01:26:51.450 Doorbell Buffer Config: Not Supported 01:26:51.450 Get LBA Status Capability: Not Supported 01:26:51.450 Command & Feature Lockdown Capability: Not Supported 01:26:51.450 Abort Command Limit: 1 01:26:51.450 Async 
Event Request Limit: 4 01:26:51.450 Number of Firmware Slots: N/A 01:26:51.450 Firmware Slot 1 Read-Only: N/A 01:26:51.450 Firmware Activation Without Reset: N/A 01:26:51.450 Multiple Update Detection Support: N/A 01:26:51.450 Firmware Update Granularity: No Information Provided 01:26:51.450 Per-Namespace SMART Log: No 01:26:51.450 Asymmetric Namespace Access Log Page: Not Supported 01:26:51.450 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:26:51.450 Command Effects Log Page: Not Supported 01:26:51.450 Get Log Page Extended Data: Supported 01:26:51.450 Telemetry Log Pages: Not Supported 01:26:51.450 Persistent Event Log Pages: Not Supported 01:26:51.450 Supported Log Pages Log Page: May Support 01:26:51.450 Commands Supported & Effects Log Page: Not Supported 01:26:51.450 Feature Identifiers & Effects Log Page:May Support 01:26:51.450 NVMe-MI Commands & Effects Log Page: May Support 01:26:51.450 Data Area 4 for Telemetry Log: Not Supported 01:26:51.450 Error Log Page Entries Supported: 128 01:26:51.450 Keep Alive: Not Supported 01:26:51.450 01:26:51.450 NVM Command Set Attributes 01:26:51.450 ========================== 01:26:51.450 Submission Queue Entry Size 01:26:51.450 Max: 1 01:26:51.450 Min: 1 01:26:51.450 Completion Queue Entry Size 01:26:51.450 Max: 1 01:26:51.450 Min: 1 01:26:51.450 Number of Namespaces: 0 01:26:51.450 Compare Command: Not Supported 01:26:51.450 Write Uncorrectable Command: Not Supported 01:26:51.450 Dataset Management Command: Not Supported 01:26:51.450 Write Zeroes Command: Not Supported 01:26:51.450 Set Features Save Field: Not Supported 01:26:51.450 Reservations: Not Supported 01:26:51.450 Timestamp: Not Supported 01:26:51.450 Copy: Not Supported 01:26:51.450 Volatile Write Cache: Not Present 01:26:51.450 Atomic Write Unit (Normal): 1 01:26:51.450 Atomic Write Unit (PFail): 1 01:26:51.450 Atomic Compare & Write Unit: 1 01:26:51.450 Fused Compare & Write: Supported 01:26:51.450 Scatter-Gather List 01:26:51.450 SGL Command Set: Supported 01:26:51.450 SGL Keyed: Supported 01:26:51.450 SGL Bit Bucket Descriptor: Not Supported 01:26:51.450 SGL Metadata Pointer: Not Supported 01:26:51.450 Oversized SGL: Not Supported 01:26:51.450 SGL Metadata Address: Not Supported 01:26:51.450 SGL Offset: Supported 01:26:51.450 Transport SGL Data Block: Not Supported 01:26:51.450 Replay Protected Memory Block: Not Supported 01:26:51.450 01:26:51.450 Firmware Slot Information 01:26:51.450 ========================= 01:26:51.450 Active slot: 0 01:26:51.450 01:26:51.450 01:26:51.450 Error Log 01:26:51.450 ========= 01:26:51.450 01:26:51.450 Active Namespaces 01:26:51.450 ================= 01:26:51.450 Discovery Log Page 01:26:51.450 ================== 01:26:51.450 Generation Counter: 2 01:26:51.450 Number of Records: 2 01:26:51.450 Record Format: 0 01:26:51.450 01:26:51.450 Discovery Log Entry 0 01:26:51.450 ---------------------- 01:26:51.450 Transport Type: 3 (TCP) 01:26:51.450 Address Family: 1 (IPv4) 01:26:51.450 Subsystem Type: 3 (Current Discovery Subsystem) 01:26:51.450 Entry Flags: 01:26:51.450 Duplicate Returned Information: 1 01:26:51.450 Explicit Persistent Connection Support for Discovery: 1 01:26:51.450 Transport Requirements: 01:26:51.450 Secure Channel: Not Required 01:26:51.450 Port ID: 0 (0x0000) 01:26:51.450 Controller ID: 65535 (0xffff) 01:26:51.450 Admin Max SQ Size: 128 01:26:51.450 Transport Service Identifier: 4420 01:26:51.450 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:26:51.450 Transport Address: 10.0.0.3 01:26:51.450 
Discovery Log Entry 1 01:26:51.450 ---------------------- 01:26:51.450 Transport Type: 3 (TCP) 01:26:51.450 Address Family: 1 (IPv4) 01:26:51.450 Subsystem Type: 2 (NVM Subsystem) 01:26:51.450 Entry Flags: 01:26:51.450 Duplicate Returned Information: 0 01:26:51.450 Explicit Persistent Connection Support for Discovery: 0 01:26:51.450 Transport Requirements: 01:26:51.450 Secure Channel: Not Required 01:26:51.450 Port ID: 0 (0x0000) 01:26:51.450 Controller ID: 65535 (0xffff) 01:26:51.450 Admin Max SQ Size: 128 01:26:51.450 Transport Service Identifier: 4420 01:26:51.450 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:26:51.450 Transport Address: 10.0.0.3 [2024-12-09 05:21:42.824581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.450 [2024-12-09 05:21:42.824595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.450 [2024-12-09 05:21:42.824598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93c00) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824674] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 01:26:51.450 [2024-12-09 05:21:42.824682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93600) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.450 [2024-12-09 05:21:42.824691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93780) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.450 [2024-12-09 05:21:42.824698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93900) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.450 [2024-12-09 05:21:42.824705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.450 [2024-12-09 05:21:42.824716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.450 [2024-12-09 05:21:42.824727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.450 [2024-12-09 05:21:42.824744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.450 [2024-12-09 05:21:42.824786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.450 [2024-12-09 05:21:42.824791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.450 [2024-12-09 05:21:42.824793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824796] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.450 [2024-12-09 05:21:42.824812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.450 [2024-12-09 05:21:42.824825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.450 [2024-12-09 05:21:42.824876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.450 [2024-12-09 05:21:42.824881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.450 [2024-12-09 05:21:42.824883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.450 [2024-12-09 05:21:42.824889] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 01:26:51.450 [2024-12-09 05:21:42.824892] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 01:26:51.450 [2024-12-09 05:21:42.824899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.450 [2024-12-09 05:21:42.824902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.824904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.824909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.824920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.824959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.824963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.824966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.824968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.824976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.824979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.824981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.824986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.824996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825052] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 
[2024-12-09 05:21:42.825302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 
05:21:42.825547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.451 [2024-12-09 05:21:42.825772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.451 [2024-12-09 05:21:42.825774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.451 [2024-12-09 05:21:42.825784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.451 [2024-12-09 05:21:42.825789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.451 [2024-12-09 05:21:42.825794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.451 [2024-12-09 05:21:42.825804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.451 [2024-12-09 05:21:42.825846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.825851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.825853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.825863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.825873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.825883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.825925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.825930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.825932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.825942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.825947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.825952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.825963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 
05:21:42.826079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 
05:21:42.826319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 
01:26:51.452 [2024-12-09 05:21:42.826577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.452 [2024-12-09 05:21:42.826677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.452 [2024-12-09 05:21:42.826688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.452 [2024-12-09 05:21:42.826725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.452 [2024-12-09 05:21:42.826730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.452 [2024-12-09 05:21:42.826732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.452 [2024-12-09 05:21:42.826742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.452 [2024-12-09 05:21:42.826747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.826752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.826762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.826801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.826805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.826808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.826818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:26:51.453 [2024-12-09 05:21:42.826823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.826828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.826839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.826877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.826882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.826884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.826895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.826905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.826915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.826954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.826959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.826962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.826973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.826978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.826983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.826993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827067] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 01:26:51.453 [2024-12-09 05:21:42.827610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.453 [2024-12-09 05:21:42.827620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.453 [2024-12-09 05:21:42.827625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.453 [2024-12-09 05:21:42.827631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.453 [2024-12-09 05:21:42.827642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.453 [2024-12-09 05:21:42.827688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.453 [2024-12-09 05:21:42.827693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.827696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.827706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.827716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.827727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.827765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.827770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.827772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.827782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.827792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.827803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.827846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.827851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.827853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827856] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.827863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.827873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.827884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.827926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.827930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.827933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.827943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.827948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.827953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.827963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.828034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828106] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.828113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.828193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.828285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 
05:21:42.828365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.454 [2024-12-09 05:21:42.828422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.454 [2024-12-09 05:21:42.828433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.454 [2024-12-09 05:21:42.828438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.454 [2024-12-09 05:21:42.828443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.454 [2024-12-09 05:21:42.828454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.454 [2024-12-09 05:21:42.828492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.454 [2024-12-09 05:21:42.828501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.455 [2024-12-09 05:21:42.828504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.828507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.455 [2024-12-09 05:21:42.828514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.828517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.828520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.455 [2024-12-09 05:21:42.828525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.455 [2024-12-09 05:21:42.828536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.455 [2024-12-09 05:21:42.832570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.455 [2024-12-09 05:21:42.832583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.455 [2024-12-09 05:21:42.832585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.832588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.455 [2024-12-09 05:21:42.832596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.832599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.832601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd52d90) 01:26:51.455 [2024-12-09 05:21:42.832607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.455 [2024-12-09 05:21:42.832620] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd93a80, cid 3, qid 0 01:26:51.455 [2024-12-09 05:21:42.832666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.455 [2024-12-09 05:21:42.832671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.455 [2024-12-09 05:21:42.832673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.455 [2024-12-09 05:21:42.832676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd93a80) on tqpair=0xd52d90 01:26:51.455 [2024-12-09 05:21:42.832682] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 01:26:51.455 01:26:51.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:26:51.455 [2024-12-09 05:21:42.950780] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:51.455 [2024-12-09 05:21:42.950814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86345 ] 01:26:51.716 [2024-12-09 05:21:43.100052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 01:26:51.716 [2024-12-09 05:21:43.100102] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:26:51.716 [2024-12-09 05:21:43.100105] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:26:51.716 [2024-12-09 05:21:43.100115] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:26:51.716 [2024-12-09 05:21:43.100123] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:26:51.716 [2024-12-09 05:21:43.100318] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 01:26:51.716 [2024-12-09 05:21:43.100346] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfe4d90 0 01:26:51.716 [2024-12-09 05:21:43.106574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:26:51.716 [2024-12-09 05:21:43.106589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:26:51.716 [2024-12-09 05:21:43.106593] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:26:51.716 [2024-12-09 05:21:43.106595] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:26:51.716 [2024-12-09 05:21:43.106620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.106623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.106626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.716 [2024-12-09 05:21:43.106636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:26:51.716 [2024-12-09 05:21:43.106656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.716 [2024-12-09 05:21:43.114580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 01:26:51.716 [2024-12-09 05:21:43.114594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.716 [2024-12-09 05:21:43.114596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.716 [2024-12-09 05:21:43.114607] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:26:51.716 [2024-12-09 05:21:43.114612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 01:26:51.716 [2024-12-09 05:21:43.114616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 01:26:51.716 [2024-12-09 05:21:43.114626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.716 [2024-12-09 05:21:43.114638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.716 [2024-12-09 05:21:43.114655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.716 [2024-12-09 05:21:43.114708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.716 [2024-12-09 05:21:43.114712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.716 [2024-12-09 05:21:43.114715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.716 [2024-12-09 05:21:43.114722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 01:26:51.716 [2024-12-09 05:21:43.114727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 01:26:51.716 [2024-12-09 05:21:43.114732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.716 [2024-12-09 05:21:43.114742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.716 [2024-12-09 05:21:43.114754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.716 [2024-12-09 05:21:43.114801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.716 [2024-12-09 05:21:43.114806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.716 [2024-12-09 05:21:43.114808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.716 [2024-12-09 05:21:43.114814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no 
timeout) 01:26:51.716 [2024-12-09 05:21:43.114820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 01:26:51.716 [2024-12-09 05:21:43.114825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.716 [2024-12-09 05:21:43.114835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.716 [2024-12-09 05:21:43.114846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.716 [2024-12-09 05:21:43.114888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.716 [2024-12-09 05:21:43.114893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.716 [2024-12-09 05:21:43.114895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.716 [2024-12-09 05:21:43.114902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:26:51.716 [2024-12-09 05:21:43.114909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.716 [2024-12-09 05:21:43.114919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.716 [2024-12-09 05:21:43.114929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.716 [2024-12-09 05:21:43.114976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.716 [2024-12-09 05:21:43.114981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.716 [2024-12-09 05:21:43.114983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.716 [2024-12-09 05:21:43.114986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.716 [2024-12-09 05:21:43.114989] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 01:26:51.716 [2024-12-09 05:21:43.114992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 01:26:51.716 [2024-12-09 05:21:43.114998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:26:51.717 [2024-12-09 05:21:43.115101] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 01:26:51.717 [2024-12-09 05:21:43.115104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:26:51.717 [2024-12-09 05:21:43.115111] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.717 [2024-12-09 05:21:43.115132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.717 [2024-12-09 05:21:43.115175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.717 [2024-12-09 05:21:43.115180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.717 [2024-12-09 05:21:43.115182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.717 [2024-12-09 05:21:43.115188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:26:51.717 [2024-12-09 05:21:43.115195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.717 [2024-12-09 05:21:43.115216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.717 [2024-12-09 05:21:43.115257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.717 [2024-12-09 05:21:43.115262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.717 [2024-12-09 05:21:43.115264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.717 [2024-12-09 05:21:43.115270] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:26:51.717 [2024-12-09 05:21:43.115273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 01:26:51.717 [2024-12-09 05:21:43.115284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
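Up to this point the trace records the standard NVMe-oF admin-queue bring-up: the host connects the TCP socket, completes FABRIC CONNECT, reads VS and CAP, checks CC.EN, waits for CSTS.RDY = 0, then writes CC.EN = 1 and polls until CSTS.RDY = 1, all through Fabrics Property Get/Set capsules. A minimal sketch of that handshake follows, for illustration only; enable_controller(), prop_get() and prop_set() are hypothetical helpers standing in for the property capsules, while the register layouts come from SPDK's public spdk/nvme_spec.h.

  #include "spdk/nvme_spec.h"

  /* Hypothetical helpers: issue a Fabrics Property Get/Set for a 4-byte register. */
  uint32_t prop_get(void *adminq, uint32_t offset);
  void     prop_set(void *adminq, uint32_t offset, uint32_t value);

  /* Sketch of the CC.EN / CSTS.RDY handshake recorded in the trace above. */
  static void enable_controller(void *adminq)
  {
          union spdk_nvme_cc_register   cc;
          union spdk_nvme_csts_register csts;

          /* The target starts disabled here; wait for CSTS.RDY = 0 before enabling. */
          do {
                  csts.raw = prop_get(adminq, offsetof(struct spdk_nvme_registers, csts));
          } while (csts.bits.rdy != 0);

          cc.raw = prop_get(adminq, offsetof(struct spdk_nvme_registers, cc));
          cc.bits.en = 1;                                   /* CC.EN = 1 */
          prop_set(adminq, offsetof(struct spdk_nvme_registers, cc), cc.raw);

          do {                                              /* wait for CSTS.RDY = 1 */
                  csts.raw = prop_get(adminq, offsetof(struct spdk_nvme_registers, csts));
          } while (csts.bits.rdy != 1);
  }

The IDENTIFY (06h) command queued immediately above then fetches the controller data that is printed further down in this log.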
01:26:51.717 [2024-12-09 05:21:43.115311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.717 [2024-12-09 05:21:43.115382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.717 [2024-12-09 05:21:43.115387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.717 [2024-12-09 05:21:43.115389] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=4096, cccid=0 01:26:51.717 [2024-12-09 05:21:43.115395] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025600) on tqpair(0xfe4d90): expected_datao=0, payload_size=4096 01:26:51.717 [2024-12-09 05:21:43.115398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115404] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115407] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.717 [2024-12-09 05:21:43.115418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.717 [2024-12-09 05:21:43.115420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.717 [2024-12-09 05:21:43.115428] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 01:26:51.717 [2024-12-09 05:21:43.115431] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 01:26:51.717 [2024-12-09 05:21:43.115434] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 01:26:51.717 [2024-12-09 05:21:43.115440] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 01:26:51.717 [2024-12-09 05:21:43.115443] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 01:26:51.717 [2024-12-09 05:21:43.115446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:26:51.717 [2024-12-09 05:21:43.115479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.717 [2024-12-09 05:21:43.115524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.717 [2024-12-09 05:21:43.115529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 01:26:51.717 [2024-12-09 05:21:43.115531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90 01:26:51.717 [2024-12-09 05:21:43.115539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.717 [2024-12-09 05:21:43.115553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.717 [2024-12-09 05:21:43.115580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.717 [2024-12-09 05:21:43.115594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.717 [2024-12-09 05:21:43.115606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.717 [2024-12-09 05:21:43.115640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025600, cid 0, qid 0 01:26:51.717 [2024-12-09 05:21:43.115644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025780, cid 1, qid 0 01:26:51.717 [2024-12-09 05:21:43.115648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025900, cid 
2, qid 0 01:26:51.717 [2024-12-09 05:21:43.115652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.717 [2024-12-09 05:21:43.115655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.717 [2024-12-09 05:21:43.115739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.717 [2024-12-09 05:21:43.115744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.717 [2024-12-09 05:21:43.115746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.717 [2024-12-09 05:21:43.115754] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 01:26:51.717 [2024-12-09 05:21:43.115757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 01:26:51.717 [2024-12-09 05:21:43.115772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.717 [2024-12-09 05:21:43.115777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.717 [2024-12-09 05:21:43.115782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:26:51.717 [2024-12-09 05:21:43.115793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.718 [2024-12-09 05:21:43.115842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.115847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.115849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.115852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.115909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.115916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.115921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.115930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.115935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.718 [2024-12-09 05:21:43.115947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 
01:26:51.718 [2024-12-09 05:21:43.115996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.718 [2024-12-09 05:21:43.116000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.718 [2024-12-09 05:21:43.116003] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116005] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=4096, cccid=4 01:26:51.718 [2024-12-09 05:21:43.116008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025c00) on tqpair(0xfe4d90): expected_datao=0, payload_size=4096 01:26:51.718 [2024-12-09 05:21:43.116011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116016] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116041] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 01:26:51.718 [2024-12-09 05:21:43.116048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.718 [2024-12-09 05:21:43.116079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.718 [2024-12-09 05:21:43.116143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.718 [2024-12-09 05:21:43.116148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.718 [2024-12-09 05:21:43.116150] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116153] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=4096, cccid=4 01:26:51.718 [2024-12-09 05:21:43.116155] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025c00) on tqpair(0xfe4d90): expected_datao=0, payload_size=4096 01:26:51.718 [2024-12-09 05:21:43.116158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116166] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.718 [2024-12-09 05:21:43.116232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.718 [2024-12-09 05:21:43.116280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.718 [2024-12-09 05:21:43.116285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.718 [2024-12-09 05:21:43.116287] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116290] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=4096, cccid=4 01:26:51.718 [2024-12-09 05:21:43.116293] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025c00) on tqpair(0xfe4d90): expected_datao=0, payload_size=4096 01:26:51.718 [2024-12-09 05:21:43.116296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116301] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116303] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 01:26:51.718 [2024-12-09 
05:21:43.116344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116352] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 01:26:51.718 [2024-12-09 05:21:43.116355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 01:26:51.718 [2024-12-09 05:21:43.116359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 01:26:51.718 [2024-12-09 05:21:43.116369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.718 [2024-12-09 05:21:43.116382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:26:51.718 [2024-12-09 05:21:43.116406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.718 [2024-12-09 05:21:43.116410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025d80, cid 5, qid 0 01:26:51.718 [2024-12-09 05:21:43.116467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025d80) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.718 [2024-12-09 05:21:43.116517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025d80, cid 5, qid 0 
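After the namespace identify steps, the utility walks through a series of admin Get Features and Get Log Page queries (keep alive timer, arbitration, power management, temperature threshold, number of queues, and the log pages fetched just below). For illustration, a short sketch of one such query using SPDK's public host API is given; query_num_queues(), get_feat_done() and the g_done flag are illustrative names, and an already-connected ctrlr is assumed.

  #include <stdio.h>
  #include "spdk/nvme.h"

  static bool g_done;

  /* Completion callback: for Get Features, cdw0 carries the feature's current value. */
  static void get_feat_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
  {
          if (!spdk_nvme_cpl_is_error(cpl)) {
                  printf("NUMBER OF QUEUES cdw0: 0x%08x\n", cpl->cdw0);
          }
          g_done = true;
  }

  /* Issue GET FEATURES (FID 0x07, Number of Queues) and poll the admin queue until it completes. */
  static int query_num_queues(struct spdk_nvme_ctrlr *ctrlr)
  {
          int rc = spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
                                                   0, NULL, 0, get_feat_done, NULL);
          if (rc != 0) {
                  return rc;
          }
          while (!g_done) {
                  spdk_nvme_ctrlr_process_admin_completions(ctrlr);
          }
          return 0;
  }

This mirrors the GET FEATURES NUMBER OF QUEUES (cdw10:00000007) capsule visible in the trace that follows.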
01:26:51.718 [2024-12-09 05:21:43.116578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.718 [2024-12-09 05:21:43.116588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.718 [2024-12-09 05:21:43.116591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025d80) on tqpair=0xfe4d90 01:26:51.718 [2024-12-09 05:21:43.116601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.718 [2024-12-09 05:21:43.116603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4d90) 01:26:51.718 [2024-12-09 05:21:43.116608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025d80, cid 5, qid 0 01:26:51.719 [2024-12-09 05:21:43.116659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.116664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.116667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025d80) on tqpair=0xfe4d90 01:26:51.719 [2024-12-09 05:21:43.116677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4d90) 01:26:51.719 [2024-12-09 05:21:43.116684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025d80, cid 5, qid 0 01:26:51.719 [2024-12-09 05:21:43.116734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.116739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.116741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025d80) on tqpair=0xfe4d90 01:26:51.719 [2024-12-09 05:21:43.116755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4d90) 01:26:51.719 [2024-12-09 05:21:43.116763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4d90) 01:26:51.719 [2024-12-09 05:21:43.116775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:26:51.719 [2024-12-09 05:21:43.116783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfe4d90) 01:26:51.719 [2024-12-09 05:21:43.116788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfe4d90) 01:26:51.719 [2024-12-09 05:21:43.116801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.719 [2024-12-09 05:21:43.116812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025d80, cid 5, qid 0 01:26:51.719 [2024-12-09 05:21:43.116816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025c00, cid 4, qid 0 01:26:51.719 [2024-12-09 05:21:43.116820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025f00, cid 6, qid 0 01:26:51.719 [2024-12-09 05:21:43.116823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026080, cid 7, qid 0 01:26:51.719 [2024-12-09 05:21:43.116949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.719 [2024-12-09 05:21:43.116954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.719 [2024-12-09 05:21:43.116956] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116958] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=8192, cccid=5 01:26:51.719 [2024-12-09 05:21:43.116961] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025d80) on tqpair(0xfe4d90): expected_datao=0, payload_size=8192 01:26:51.719 [2024-12-09 05:21:43.116964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116979] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.719 [2024-12-09 05:21:43.116988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.719 [2024-12-09 05:21:43.116990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.116993] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=512, cccid=4 01:26:51.719 [2024-12-09 05:21:43.116996] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025c00) on tqpair(0xfe4d90): expected_datao=0, payload_size=512 01:26:51.719 [2024-12-09 05:21:43.116998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117003] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117006] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.719 [2024-12-09 05:21:43.117014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.719 [2024-12-09 05:21:43.117016] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117019] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=512, cccid=6 01:26:51.719 [2024-12-09 05:21:43.117022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1025f00) on tqpair(0xfe4d90): expected_datao=0, payload_size=512 01:26:51.719 [2024-12-09 05:21:43.117025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117030] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117033] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:26:51.719 [2024-12-09 05:21:43.117041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:26:51.719 [2024-12-09 05:21:43.117043] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117046] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4d90): datao=0, datal=4096, cccid=7 01:26:51.719 [2024-12-09 05:21:43.117049] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1026080) on tqpair(0xfe4d90): expected_datao=0, payload_size=4096 01:26:51.719 [2024-12-09 05:21:43.117051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117059] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.117070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.117072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025d80) on tqpair=0xfe4d90 01:26:51.719 [2024-12-09 05:21:43.117085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.117089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.117092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025c00) on tqpair=0xfe4d90 01:26:51.719 [2024-12-09 05:21:43.117103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.117108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.117110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025f00) on tqpair=0xfe4d90 01:26:51.719 [2024-12-09 05:21:43.117118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.719 [2024-12-09 05:21:43.117123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.719 [2024-12-09 05:21:43.117125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.719 [2024-12-09 05:21:43.117127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1026080) on tqpair=0xfe4d90 01:26:51.719 
=====================================================
01:26:51.719 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:26:51.719 =====================================================
01:26:51.719 Controller Capabilities/Features
01:26:51.719 ================================
01:26:51.719 Vendor ID: 8086
01:26:51.719 Subsystem Vendor ID: 8086
01:26:51.719 Serial Number: SPDK00000000000001
01:26:51.719 Model Number: SPDK bdev Controller
01:26:51.719 Firmware Version: 25.01
01:26:51.719 Recommended Arb Burst: 6
01:26:51.719 IEEE OUI Identifier: e4 d2 5c
01:26:51.719 Multi-path I/O
01:26:51.719 May have multiple subsystem ports: Yes
01:26:51.719 May have multiple controllers: Yes
01:26:51.719 Associated with SR-IOV VF: No
01:26:51.719 Max Data Transfer Size: 131072
01:26:51.719 Max Number of Namespaces: 32
01:26:51.719 Max Number of I/O Queues: 127
01:26:51.719 NVMe Specification Version (VS): 1.3
01:26:51.719 NVMe Specification Version (Identify): 1.3
01:26:51.719 Maximum Queue Entries: 128
01:26:51.719 Contiguous Queues Required: Yes
01:26:51.719 Arbitration Mechanisms Supported
01:26:51.719 Weighted Round Robin: Not Supported
01:26:51.719 Vendor Specific: Not Supported
01:26:51.719 Reset Timeout: 15000 ms
01:26:51.719 Doorbell Stride: 4 bytes
01:26:51.719 NVM Subsystem Reset: Not Supported
01:26:51.719 Command Sets Supported
01:26:51.719 NVM Command Set: Supported
01:26:51.719 Boot Partition: Not Supported
01:26:51.719 Memory Page Size Minimum: 4096 bytes
01:26:51.719 Memory Page Size Maximum: 4096 bytes
01:26:51.719 Persistent Memory Region: Not Supported
01:26:51.719 Optional Asynchronous Events Supported
01:26:51.719 Namespace Attribute Notices: Supported
01:26:51.719 Firmware Activation Notices: Not Supported
01:26:51.719 ANA Change Notices: Not Supported
01:26:51.719 PLE Aggregate Log Change Notices: Not Supported
01:26:51.719 LBA Status Info Alert Notices: Not Supported
01:26:51.719 EGE Aggregate Log Change Notices: Not Supported
01:26:51.719 Normal NVM Subsystem Shutdown event: Not Supported
01:26:51.719 Zone Descriptor Change Notices: Not Supported
01:26:51.719 Discovery Log Change Notices: Not Supported
01:26:51.720 Controller Attributes
01:26:51.720 128-bit Host Identifier: Supported
01:26:51.720 Non-Operational Permissive Mode: Not Supported
01:26:51.720 NVM Sets: Not Supported
01:26:51.720 Read Recovery Levels: Not Supported
01:26:51.720 Endurance Groups: Not Supported
01:26:51.720 Predictable Latency Mode: Not Supported
01:26:51.720 Traffic Based Keep ALive: Not Supported
01:26:51.720 Namespace Granularity: Not Supported
01:26:51.720 SQ Associations: Not Supported
01:26:51.720 UUID List: Not Supported
01:26:51.720 Multi-Domain Subsystem: Not Supported
01:26:51.720 Fixed Capacity Management: Not Supported
01:26:51.720 Variable Capacity Management: Not Supported
01:26:51.720 Delete Endurance Group: Not Supported
01:26:51.720 Delete NVM Set: Not Supported
01:26:51.720 Extended LBA Formats Supported: Not Supported
01:26:51.720 Flexible Data Placement Supported: Not Supported
01:26:51.720
01:26:51.720 Controller Memory Buffer Support
01:26:51.720 ================================
01:26:51.720 Supported: No
01:26:51.720
01:26:51.720 Persistent Memory Region Support
01:26:51.720 ================================
01:26:51.720 Supported: No
01:26:51.720
01:26:51.720 Admin Command Set Attributes
01:26:51.720 ============================
01:26:51.720 Security Send/Receive: Not Supported
01:26:51.720 Format NVM: Not Supported
01:26:51.720 Firmware Activate/Download: Not Supported
01:26:51.720 Namespace Management: Not Supported
01:26:51.720 Device Self-Test: Not Supported
01:26:51.720 Directives: Not Supported
01:26:51.720 NVMe-MI: Not Supported
01:26:51.720 Virtualization Management: Not Supported
01:26:51.720 Doorbell Buffer Config: Not Supported
01:26:51.720 Get LBA Status Capability: Not Supported
01:26:51.720 Command & Feature Lockdown Capability: Not Supported
01:26:51.720 Abort Command Limit: 4
01:26:51.720 Async Event Request Limit: 4
01:26:51.720 Number of Firmware Slots: N/A
01:26:51.720 Firmware Slot 1 Read-Only: N/A
01:26:51.720 Firmware Activation Without Reset: N/A
01:26:51.720 Multiple Update Detection Support: N/A
01:26:51.720 Firmware Update Granularity: No Information Provided
01:26:51.720 Per-Namespace SMART Log: No
01:26:51.720 Asymmetric Namespace Access Log Page: Not Supported
01:26:51.720 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
01:26:51.720 Command Effects Log Page: Supported
01:26:51.720 Get Log Page Extended Data: Supported
01:26:51.720 Telemetry Log Pages: Not Supported
01:26:51.720 Persistent Event Log Pages: Not Supported
01:26:51.720 Supported Log Pages Log Page: May Support
01:26:51.720 Commands Supported & Effects Log Page: Not Supported
01:26:51.720 Feature Identifiers & Effects Log Page:May Support
01:26:51.720 NVMe-MI Commands & Effects Log Page: May Support
01:26:51.720 Data Area 4 for Telemetry Log: Not Supported
01:26:51.720 Error Log Page Entries Supported: 128
01:26:51.720 Keep Alive: Supported
01:26:51.720 Keep Alive Granularity: 10000 ms
01:26:51.720
01:26:51.720 NVM Command Set Attributes
01:26:51.720 ==========================
01:26:51.720 Submission Queue Entry Size
01:26:51.720 Max: 64
01:26:51.720 Min: 64
01:26:51.720 Completion Queue Entry Size
01:26:51.720 Max: 16
01:26:51.720 Min: 16
01:26:51.720 Number of Namespaces: 32
01:26:51.720 Compare Command: Supported
01:26:51.720 Write Uncorrectable Command: Not Supported
01:26:51.720 Dataset Management Command: Supported
01:26:51.720 Write Zeroes Command: Supported
01:26:51.720 Set Features Save Field: Not Supported
01:26:51.720 Reservations: Supported
01:26:51.720 Timestamp: Not Supported
01:26:51.720 Copy: Supported
01:26:51.720 Volatile Write Cache: Present
01:26:51.720 Atomic Write Unit (Normal): 1
01:26:51.720 Atomic Write Unit (PFail): 1
01:26:51.720 Atomic Compare & Write Unit: 1
01:26:51.720 Fused Compare & Write: Supported
01:26:51.720 Scatter-Gather List
01:26:51.720 SGL Command Set: Supported
01:26:51.720 SGL Keyed: Supported
01:26:51.720 SGL Bit Bucket Descriptor: Not Supported
01:26:51.720 SGL Metadata Pointer: Not Supported
01:26:51.720 Oversized SGL: Not Supported
01:26:51.720 SGL Metadata Address: Not Supported
01:26:51.720 SGL Offset: Supported
01:26:51.720 Transport SGL Data Block: Not Supported
01:26:51.720 Replay Protected Memory Block: Not Supported
01:26:51.720
01:26:51.720 Firmware Slot Information
01:26:51.720 =========================
01:26:51.720 Active slot: 1
01:26:51.720 Slot 1 Firmware Revision: 25.01
01:26:51.720
01:26:51.720
01:26:51.720 Commands Supported and Effects
01:26:51.720 ==============================
01:26:51.720 Admin Commands
01:26:51.720 --------------
01:26:51.720 Get Log Page (02h): Supported
01:26:51.720 Identify (06h): Supported
01:26:51.720 Abort (08h): Supported
01:26:51.720 Set Features (09h): Supported
01:26:51.720 Get Features (0Ah): Supported
01:26:51.720 Asynchronous Event Request (0Ch): Supported
01:26:51.720 Keep Alive (18h): Supported
01:26:51.720 I/O Commands
01:26:51.720 ------------
01:26:51.720 Flush (00h): Supported LBA-Change
01:26:51.720 Write (01h): Supported LBA-Change
01:26:51.720 Read (02h): Supported
01:26:51.720 Compare (05h): Supported
01:26:51.720 Write Zeroes (08h): Supported LBA-Change
01:26:51.720 Dataset Management (09h): Supported LBA-Change
01:26:51.720 Copy (19h): Supported LBA-Change
01:26:51.720
01:26:51.720 Error Log
01:26:51.720 =========
01:26:51.720
01:26:51.720 Arbitration
01:26:51.720 ===========
01:26:51.720 Arbitration Burst: 1
01:26:51.720
01:26:51.720 Power Management
01:26:51.720 ================
01:26:51.720 Number of Power States: 1
01:26:51.720 Current Power State: Power State #0
01:26:51.720 Power State #0:
01:26:51.720 Max Power: 0.00 W
01:26:51.720 Non-Operational State: Operational
01:26:51.720 Entry Latency: Not Reported
01:26:51.720 Exit Latency: Not Reported
01:26:51.720 Relative Read Throughput: 0
01:26:51.720 Relative Read Latency: 0
01:26:51.720 Relative Write Throughput: 0
01:26:51.720 Relative Write Latency: 0
01:26:51.720 Idle Power: Not Reported
01:26:51.720 Active Power: Not Reported
01:26:51.720 Non-Operational Permissive Mode: Not Supported
01:26:51.720
01:26:51.720 Health Information
01:26:51.720 ==================
01:26:51.720 Critical Warnings:
01:26:51.720 Available Spare Space: OK
01:26:51.720 Temperature: OK
01:26:51.720 Device Reliability: OK
01:26:51.720 Read Only: No
01:26:51.720 Volatile Memory Backup: OK
01:26:51.720 Current Temperature: 0 Kelvin (-273 Celsius)
01:26:51.720 Temperature Threshold: 0 Kelvin (-273 Celsius)
01:26:51.720 Available Spare: 0%
01:26:51.720 Available Spare Threshold: 0%
01:26:51.720 Life Percentage Used:[2024-12-09 05:21:43.117212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
01:26:51.720 [2024-12-09 05:21:43.117216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfe4d90)
01:26:51.720 [2024-12-09 05:21:43.117222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:26:51.720 [2024-12-09 05:21:43.117234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1026080, cid 7, qid 0
01:26:51.720 [2024-12-09 05:21:43.117283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
01:26:51.720 [2024-12-09 05:21:43.117289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
01:26:51.720 [2024-12-09 05:21:43.117291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
01:26:51.720 [2024-12-09 05:21:43.117294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1026080) on tqpair=0xfe4d90
01:26:51.720 [2024-12-09 05:21:43.117319] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
01:26:51.720 [2024-12-09 05:21:43.117325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025600) on tqpair=0xfe4d90
01:26:51.720 [2024-12-09 05:21:43.117330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:26:51.720 [2024-12-09 05:21:43.117334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025780) on tqpair=0xfe4d90
01:26:51.720 [2024-12-09 05:21:43.117337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:26:51.720 [2024-12-09 05:21:43.117341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025900) on tqpair=0xfe4d90
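The identify report above, and the ABORTED - SQ DELETION completions that follow it, come from spdk_nvme_identify connecting to the TCP subsystem at 10.0.0.3:4420, dumping the cached controller data, and then detaching. A minimal sketch of the same connect/identify/detach flow using SPDK's public host API is shown below; the transport string is the one passed on the command line earlier in this log, while the program name and error handling are illustrative.

  #include <stdio.h>
  #include "spdk/env.h"
  #include "spdk/nvme.h"

  int main(void)
  {
          struct spdk_env_opts env_opts;
          struct spdk_nvme_transport_id trid = {};
          struct spdk_nvme_ctrlr *ctrlr;
          const struct spdk_nvme_ctrlr_data *cdata;

          /* Environment setup as in SPDK's example applications. */
          spdk_env_opts_init(&env_opts);
          env_opts.name = "identify_sketch";
          if (spdk_env_init(&env_opts) < 0) {
                  return 1;
          }

          /* Same transport ID string the test passes to spdk_nvme_identify. */
          if (spdk_nvme_transport_id_parse(&trid,
              "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                  return 1;
          }

          /* Runs the admin-queue init sequence traced above (connect, enable, identify, AER, ...). */
          ctrlr = spdk_nvme_connect(&trid, NULL, 0);
          if (ctrlr == NULL) {
                  return 1;
          }

          cdata = spdk_nvme_ctrlr_get_data(ctrlr);          /* cached IDENTIFY CONTROLLER data */
          printf("Model Number: %.40s\n", (const char *)cdata->mn);

          spdk_nvme_detach(ctrlr);                          /* triggers the shutdown traced below */
          return 0;
  }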
01:26:51.721 [2024-12-09 05:21:43.117344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.721 [2024-12-09 05:21:43.117347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:51.721 [2024-12-09 05:21:43.117356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117523] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 01:26:51.721 [2024-12-09 05:21:43.117526] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 01:26:51.721 [2024-12-09 05:21:43.117533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 
[2024-12-09 05:21:43.117554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 01:26:51.721 [2024-12-09 05:21:43.117844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.117924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.117927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.721 [2024-12-09 05:21:43.117936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.117941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.721 [2024-12-09 05:21:43.117947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.721 [2024-12-09 05:21:43.117957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.721 [2024-12-09 05:21:43.117999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.721 [2024-12-09 05:21:43.118004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.721 [2024-12-09 05:21:43.118006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.721 [2024-12-09 05:21:43.118009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118333] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.118478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.118483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.118485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.118495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.118500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90) 01:26:51.722 [2024-12-09 05:21:43.118505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:51.722 [2024-12-09 05:21:43.118516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0 01:26:51.722 [2024-12-09 05:21:43.122567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:26:51.722 [2024-12-09 05:21:43.122583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:26:51.722 [2024-12-09 05:21:43.122586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.122589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90 01:26:51.722 [2024-12-09 05:21:43.122598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.122601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:26:51.722 [2024-12-09 05:21:43.122604] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4d90)
01:26:51.722 [2024-12-09 05:21:43.122610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:26:51.722 [2024-12-09 05:21:43.122625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1025a80, cid 3, qid 0
01:26:51.722 [2024-12-09 05:21:43.122667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
01:26:51.722 [2024-12-09 05:21:43.122673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
01:26:51.722 [2024-12-09 05:21:43.122676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
01:26:51.722 [2024-12-09 05:21:43.122679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1025a80) on tqpair=0xfe4d90
01:26:51.722 [2024-12-09 05:21:43.122686] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds
01:26:51.722 0%
01:26:51.722 Data Units Read: 0
01:26:51.722 Data Units Written: 0
01:26:51.722 Host Read Commands: 0
01:26:51.722 Host Write Commands: 0
01:26:51.722 Controller Busy Time: 0 minutes
01:26:51.722 Power Cycles: 0
01:26:51.722 Power On Hours: 0 hours
01:26:51.722 Unsafe Shutdowns: 0
01:26:51.722 Unrecoverable Media Errors: 0
01:26:51.722 Lifetime Error Log Entries: 0
01:26:51.722 Warning Temperature Time: 0 minutes
01:26:51.722 Critical Temperature Time: 0 minutes
01:26:51.722
01:26:51.722 Number of Queues
01:26:51.722 ================
01:26:51.722 Number of I/O Submission Queues: 127
01:26:51.722 Number of I/O Completion Queues: 127
01:26:51.722
01:26:51.722 Active Namespaces
01:26:51.722 =================
01:26:51.722 Namespace ID:1
01:26:51.723 Error Recovery Timeout: Unlimited
01:26:51.723 Command Set Identifier: NVM (00h)
01:26:51.723 Deallocate: Supported
01:26:51.723 Deallocated/Unwritten Error: Not Supported
01:26:51.723 Deallocated Read Value: Unknown
01:26:51.723 Deallocate in Write Zeroes: Not Supported
01:26:51.723 Deallocated Guard Field: 0xFFFF
01:26:51.723 Flush: Supported
01:26:51.723 Reservation: Supported
01:26:51.723 Namespace Sharing Capabilities: Multiple Controllers
01:26:51.723 Size (in LBAs): 131072 (0GiB)
01:26:51.723 Capacity (in LBAs): 131072 (0GiB)
01:26:51.723 Utilization (in LBAs): 131072 (0GiB)
01:26:51.723 NGUID: ABCDEF0123456789ABCDEF0123456789
01:26:51.723 EUI64: ABCDEF0123456789
01:26:51.723 UUID: 5bb11e01-27f0-4610-8ebf-14c64bf72e61
01:26:51.723 Thin Provisioning: Not Supported
01:26:51.723 Per-NS Atomic Units: Yes
01:26:51.723 Atomic Boundary Size (Normal): 0
01:26:51.723 Atomic Boundary Size (PFail): 0
01:26:51.723 Atomic Boundary Offset: 0
01:26:51.723 Maximum Single Source Range Length: 65535
01:26:51.723 Maximum Copy Length: 65535
01:26:51.723 Maximum Source Range Count: 1
01:26:51.723 NGUID/EUI64 Never Reused: No
01:26:51.723 Namespace Write Protected: No
01:26:51.723 Number of LBA Formats: 1
01:26:51.723 Current LBA Format: LBA Format #00
01:26:51.723 LBA Format #00: Data Size: 512 Metadata Size: 0
01:26:51.723
01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set
+x 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:51.723 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:51.723 rmmod nvme_tcp 01:26:51.723 rmmod nvme_fabrics 01:26:51.723 rmmod nvme_keyring 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86280 ']' 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86280 ']' 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:51.980 killing process with pid 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86280' 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86280 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:51.980 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 01:26:52.238 01:26:52.238 real 0m2.770s 01:26:52.238 user 0m7.460s 01:26:52.238 sys 0m0.565s 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:26:52.238 ************************************ 01:26:52.238 END TEST nvmf_identify 01:26:52.238 ************************************ 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.238 ************************************ 01:26:52.238 START TEST nvmf_perf 01:26:52.238 ************************************ 01:26:52.238 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:26:52.496 * Looking for test storage... 
01:26:52.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:52.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:52.496 --rc genhtml_branch_coverage=1 01:26:52.496 --rc genhtml_function_coverage=1 01:26:52.496 --rc genhtml_legend=1 01:26:52.496 --rc geninfo_all_blocks=1 01:26:52.496 --rc geninfo_unexecuted_blocks=1 01:26:52.496 01:26:52.496 ' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:52.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:52.496 --rc genhtml_branch_coverage=1 01:26:52.496 --rc genhtml_function_coverage=1 01:26:52.496 --rc genhtml_legend=1 01:26:52.496 --rc geninfo_all_blocks=1 01:26:52.496 --rc geninfo_unexecuted_blocks=1 01:26:52.496 01:26:52.496 ' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:52.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:52.496 --rc genhtml_branch_coverage=1 01:26:52.496 --rc genhtml_function_coverage=1 01:26:52.496 --rc genhtml_legend=1 01:26:52.496 --rc geninfo_all_blocks=1 01:26:52.496 --rc geninfo_unexecuted_blocks=1 01:26:52.496 01:26:52.496 ' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:52.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:52.496 --rc genhtml_branch_coverage=1 01:26:52.496 --rc genhtml_function_coverage=1 01:26:52.496 --rc genhtml_legend=1 01:26:52.496 --rc geninfo_all_blocks=1 01:26:52.496 --rc geninfo_unexecuted_blocks=1 01:26:52.496 01:26:52.496 ' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:52.496 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:52.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:52.497 05:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:52.497 Cannot find device "nvmf_init_br" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:52.497 Cannot find device "nvmf_init_br2" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:52.497 Cannot find device "nvmf_tgt_br" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:52.497 Cannot find device "nvmf_tgt_br2" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:52.497 Cannot find device "nvmf_init_br" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:52.497 Cannot find device "nvmf_init_br2" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:52.497 Cannot find device "nvmf_tgt_br" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:52.497 Cannot find device "nvmf_tgt_br2" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:52.497 Cannot find device "nvmf_br" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:52.497 Cannot find device "nvmf_init_if" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:52.497 Cannot find device "nvmf_init_if2" 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:52.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:52.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:52.497 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:52.754 05:21:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:52.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:52.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 01:26:52.754 01:26:52.754 --- 10.0.0.3 ping statistics --- 01:26:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:52.754 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:52.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:26:52.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 01:26:52.754 01:26:52.754 --- 10.0.0.4 ping statistics --- 01:26:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:52.754 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:52.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:52.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:26:52.754 01:26:52.754 --- 10.0.0.1 ping statistics --- 01:26:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:52.754 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:52.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:26:52.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 01:26:52.754 01:26:52.754 --- 10.0.0.2 ping statistics --- 01:26:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:52.754 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=86560 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 86560 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 86560 ']' 01:26:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:52.754 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:52.755 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:52.755 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:26:52.755 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:52.755 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:26:52.755 05:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:26:52.755 [2024-12-09 05:21:44.327692] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:26:52.755 [2024-12-09 05:21:44.327748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:53.011 [2024-12-09 05:21:44.469073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:26:53.011 [2024-12-09 05:21:44.504833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:53.011 [2024-12-09 05:21:44.504994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:53.011 [2024-12-09 05:21:44.505040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:53.011 [2024-12-09 05:21:44.505079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:53.011 [2024-12-09 05:21:44.505113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:53.011 [2024-12-09 05:21:44.505847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:53.011 [2024-12-09 05:21:44.506008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:53.012 [2024-12-09 05:21:44.505940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:53.012 [2024-12-09 05:21:44.506008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:26:53.957 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:26:54.215 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:26:54.215 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:26:54.215 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:26:54.215 05:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:26:54.475 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:26:54.475 05:21:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 01:26:54.475 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:26:54.475 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:26:54.475 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:26:54.735 [2024-12-09 05:21:46.210162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:54.735 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:26:54.994 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:26:54.994 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:26:55.255 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:26:55.255 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:26:55.557 05:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:26:55.557 [2024-12-09 05:21:47.063374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:55.557 05:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:26:55.816 05:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:26:55.816 05:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:26:55.816 05:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:26:55.816 05:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:26:56.756 Initializing NVMe Controllers 01:26:56.756 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:26:56.756 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:26:56.756 Initialization complete. Launching workers. 
01:26:56.756 ======================================================== 01:26:56.756 Latency(us) 01:26:56.756 Device Information : IOPS MiB/s Average min max 01:26:56.756 PCIE (0000:00:10.0) NSID 1 from core 0: 33738.54 131.79 948.15 236.81 7919.91 01:26:56.756 ======================================================== 01:26:56.756 Total : 33738.54 131.79 948.15 236.81 7919.91 01:26:56.756 01:26:57.014 05:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:58.387 Initializing NVMe Controllers 01:26:58.387 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:26:58.387 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:26:58.387 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:26:58.387 Initialization complete. Launching workers. 01:26:58.387 ======================================================== 01:26:58.387 Latency(us) 01:26:58.387 Device Information : IOPS MiB/s Average min max 01:26:58.387 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6278.02 24.52 159.03 61.04 4200.68 01:26:58.387 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8031.65 4999.59 11999.67 01:26:58.387 ======================================================== 01:26:58.387 Total : 6402.52 25.01 312.12 61.04 11999.67 01:26:58.387 01:26:58.387 05:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:26:59.759 Initializing NVMe Controllers 01:26:59.759 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:26:59.759 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:26:59.759 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:26:59.759 Initialization complete. Launching workers. 01:26:59.759 ======================================================== 01:26:59.759 Latency(us) 01:26:59.759 Device Information : IOPS MiB/s Average min max 01:26:59.759 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12127.68 47.37 2638.76 457.29 6311.30 01:26:59.759 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2711.30 10.59 11925.27 7629.40 20131.13 01:26:59.759 ======================================================== 01:26:59.759 Total : 14838.98 57.96 4335.55 457.29 20131.13 01:26:59.759 01:26:59.759 05:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:26:59.759 05:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:27:02.289 Initializing NVMe Controllers 01:27:02.289 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:27:02.289 Controller IO queue size 128, less than required. 01:27:02.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:02.289 Controller IO queue size 128, less than required. 
01:27:02.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:02.289 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:27:02.290 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:27:02.290 Initialization complete. Launching workers. 01:27:02.290 ======================================================== 01:27:02.290 Latency(us) 01:27:02.290 Device Information : IOPS MiB/s Average min max 01:27:02.290 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2378.13 594.53 54456.26 35258.55 107429.01 01:27:02.290 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.51 155.38 215985.22 91588.95 328390.98 01:27:02.290 ======================================================== 01:27:02.290 Total : 2999.64 749.91 87924.28 35258.55 328390.98 01:27:02.290 01:27:02.290 05:21:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 01:27:02.548 Initializing NVMe Controllers 01:27:02.548 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:27:02.548 Controller IO queue size 128, less than required. 01:27:02.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:02.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:27:02.548 Controller IO queue size 128, less than required. 01:27:02.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:02.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 01:27:02.548 WARNING: Some requested NVMe devices were skipped 01:27:02.548 No valid NVMe controllers or AIO or URING devices found 01:27:02.806 05:21:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 01:27:05.332 Initializing NVMe Controllers 01:27:05.332 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:27:05.332 Controller IO queue size 128, less than required. 01:27:05.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:05.332 Controller IO queue size 128, less than required. 01:27:05.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:27:05.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:27:05.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:27:05.332 Initialization complete. Launching workers. 
01:27:05.332 01:27:05.332 ==================== 01:27:05.332 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:27:05.332 TCP transport: 01:27:05.332 polls: 12653 01:27:05.332 idle_polls: 6043 01:27:05.332 sock_completions: 6610 01:27:05.332 nvme_completions: 8275 01:27:05.332 submitted_requests: 12434 01:27:05.332 queued_requests: 1 01:27:05.332 01:27:05.332 ==================== 01:27:05.332 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:27:05.332 TCP transport: 01:27:05.332 polls: 12709 01:27:05.332 idle_polls: 7822 01:27:05.332 sock_completions: 4887 01:27:05.332 nvme_completions: 9701 01:27:05.332 submitted_requests: 14714 01:27:05.332 queued_requests: 1 01:27:05.332 ======================================================== 01:27:05.332 Latency(us) 01:27:05.332 Device Information : IOPS MiB/s Average min max 01:27:05.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2066.43 516.61 63012.71 37623.68 134168.79 01:27:05.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2422.58 605.64 53087.06 27093.88 82636.16 01:27:05.332 ======================================================== 01:27:05.332 Total : 4489.01 1122.25 57656.15 27093.88 134168.79 01:27:05.332 01:27:05.332 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 01:27:05.332 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:05.600 05:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:05.600 rmmod nvme_tcp 01:27:05.600 rmmod nvme_fabrics 01:27:05.600 rmmod nvme_keyring 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 86560 ']' 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 86560 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 86560 ']' 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 86560 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86560 01:27:05.600 killing process with pid 86560 01:27:05.600 05:21:57 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86560' 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 86560 01:27:05.600 05:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 86560 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:07.519 05:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:07.519 05:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 01:27:07.519 01:27:07.519 real 0m15.194s 01:27:07.519 user 0m54.923s 01:27:07.519 sys 0m2.837s 01:27:07.519 05:21:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:07.519 ************************************ 01:27:07.519 END TEST nvmf_perf 01:27:07.519 ************************************ 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:07.520 ************************************ 01:27:07.520 START TEST nvmf_fio_host 01:27:07.520 ************************************ 01:27:07.520 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:27:07.796 * Looking for test storage... 01:27:07.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:07.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:07.796 --rc genhtml_branch_coverage=1 01:27:07.796 --rc genhtml_function_coverage=1 01:27:07.796 --rc genhtml_legend=1 01:27:07.796 --rc geninfo_all_blocks=1 01:27:07.796 --rc geninfo_unexecuted_blocks=1 01:27:07.796 01:27:07.796 ' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:07.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:07.796 --rc genhtml_branch_coverage=1 01:27:07.796 --rc genhtml_function_coverage=1 01:27:07.796 --rc genhtml_legend=1 01:27:07.796 --rc geninfo_all_blocks=1 01:27:07.796 --rc geninfo_unexecuted_blocks=1 01:27:07.796 01:27:07.796 ' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:07.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:07.796 --rc genhtml_branch_coverage=1 01:27:07.796 --rc genhtml_function_coverage=1 01:27:07.796 --rc genhtml_legend=1 01:27:07.796 --rc geninfo_all_blocks=1 01:27:07.796 --rc geninfo_unexecuted_blocks=1 01:27:07.796 01:27:07.796 ' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:07.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:07.796 --rc genhtml_branch_coverage=1 01:27:07.796 --rc genhtml_function_coverage=1 01:27:07.796 --rc genhtml_legend=1 01:27:07.796 --rc geninfo_all_blocks=1 01:27:07.796 --rc geninfo_unexecuted_blocks=1 01:27:07.796 01:27:07.796 ' 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.796 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:07.797 05:21:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.797 05:21:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:07.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
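The "[: : integer expression expected" complaint from nvmf/common.sh line 33 comes from the traced test '[' '' -eq 1 ']': an empty string is handed to the numeric -eq operator, so [ exits with an error that the script simply ignores. The same check can be written so it stays quiet when the variable is unset; a minimal sketch with a hypothetical variable name, not the one the script actually uses:

  # default the value to 0 so the numeric comparison always sees an integer
  if [ "${SOME_SPDK_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi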
01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:07.797 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:07.798 Cannot find device "nvmf_init_br" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:07.798 Cannot find device "nvmf_init_br2" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:27:07.798 Cannot find device "nvmf_tgt_br" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 01:27:07.798 Cannot find device "nvmf_tgt_br2" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:07.798 Cannot find device "nvmf_init_br" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:07.798 Cannot find device "nvmf_init_br2" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:07.798 Cannot find device "nvmf_tgt_br" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:07.798 Cannot find device "nvmf_tgt_br2" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:07.798 Cannot find device "nvmf_br" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:27:07.798 Cannot find device "nvmf_init_if" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:07.798 Cannot find device "nvmf_init_if2" 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:07.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:07.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:07.798 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:08.056 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:08.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:27:08.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 01:27:08.057 01:27:08.057 --- 10.0.0.3 ping statistics --- 01:27:08.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:08.057 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:08.057 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:08.057 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 01:27:08.057 01:27:08.057 --- 10.0.0.4 ping statistics --- 01:27:08.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:08.057 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:08.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:27:08.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 01:27:08.057 01:27:08.057 --- 10.0.0.1 ping statistics --- 01:27:08.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:08.057 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:08.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:27:08.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 01:27:08.057 01:27:08.057 --- 10.0.0.2 ping statistics --- 01:27:08.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:08.057 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87097 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87097 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 87097 ']' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
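The four successful pings confirm the topology that nvmf_veth_init has just built: the initiator addresses 10.0.0.1/10.0.0.2 sit on veth interfaces in the root namespace, the target addresses 10.0.0.3/10.0.0.4 sit inside the nvmf_tgt_ns_spdk namespace, and the peer ends of all four veth pairs are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420 tagged SPDK_NVMF so they can be stripped again at teardown. A condensed sketch of one initiator/target pair, using only commands that appear in the trace above:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # bring every interface up (omitted here), then verify connectivity both ways:
  ping -c 1 10.0.0.3                                   # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns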
01:27:08.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:08.057 05:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:27:08.057 [2024-12-09 05:21:59.565148] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:08.057 [2024-12-09 05:21:59.565203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:08.315 [2024-12-09 05:21:59.699448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:27:08.315 [2024-12-09 05:21:59.730222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:08.315 [2024-12-09 05:21:59.730265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:08.315 [2024-12-09 05:21:59.730270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:08.315 [2024-12-09 05:21:59.730274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:08.315 [2024-12-09 05:21:59.730278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
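The EAL banner above reflects the launch arguments traced at host/fio.sh@23: -m 0xF hands the target cores 0-3 (hence "Total cores available: 4" and the four reactor notices that follow), -e 0xFFFF enables every tracepoint group (the Tracepoint Group Mask notice), and -i 0 selects shared-memory id 0, which is why the hint suggests 'spdk_trace -s nvmf -i 0'. Once the RPC socket is up, the test configures the target with the rpc.py calls traced below; a condensed sketch of that sequence, using the same paths and names:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # after waiting for /var/tmp/spdk.sock:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options used by the test
  $rpc bdev_malloc_create 64 512 -b Malloc1                    # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420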
01:27:08.315 [2024-12-09 05:21:59.731063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:08.315 [2024-12-09 05:21:59.731194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:08.315 [2024-12-09 05:21:59.731514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:08.315 [2024-12-09 05:21:59.731510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:27:08.881 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:08.881 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 01:27:08.881 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:27:09.138 [2024-12-09 05:22:00.594204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:09.138 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:27:09.138 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:09.138 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:27:09.138 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:27:09.398 Malloc1 01:27:09.398 05:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:27:09.660 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:27:09.660 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:27:09.920 [2024-12-09 05:22:01.458762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:09.920 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:27:10.179 05:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:27:10.438 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:27:10.438 fio-3.35 01:27:10.438 Starting 1 thread 01:27:12.981 01:27:12.981 test: (groupid=0, jobs=1): err= 0: pid=87226: Mon Dec 9 05:22:04 2024 01:27:12.981 read: IOPS=13.4k, BW=52.3MiB/s (54.9MB/s)(105MiB/2005msec) 01:27:12.981 slat (nsec): min=1414, max=380966, avg=1556.84, stdev=3049.09 01:27:12.981 clat (usec): min=2476, max=9197, avg=4994.27, stdev=329.76 01:27:12.981 lat (usec): min=2477, max=9199, avg=4995.83, stdev=329.91 01:27:12.981 clat percentiles (usec): 01:27:12.981 | 1.00th=[ 4293], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 01:27:12.981 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5080], 01:27:12.981 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 01:27:12.981 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6849], 99.95th=[ 7963], 01:27:12.981 | 99.99th=[ 8717] 01:27:12.981 bw ( KiB/s): min=52872, max=53888, per=99.97%, avg=53589.00, stdev=481.61, samples=4 01:27:12.981 iops : min=13218, max=13472, avg=13397.25, stdev=120.40, samples=4 01:27:12.981 write: IOPS=13.4k, BW=52.3MiB/s (54.9MB/s)(105MiB/2005msec); 0 zone resets 01:27:12.981 slat (nsec): min=1450, max=309234, avg=1600.98, stdev=2054.13 01:27:12.981 clat (usec): min=1555, max=8606, avg=4519.87, stdev=301.13 01:27:12.981 lat (usec): min=1557, max=8608, avg=4521.48, stdev=301.17 01:27:12.981 clat percentiles (usec): 01:27:12.981 | 1.00th=[ 3884], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4293], 01:27:12.981 | 
30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 01:27:12.981 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4948], 01:27:12.981 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6849], 99.95th=[ 7898], 01:27:12.981 | 99.99th=[ 8455] 01:27:12.981 bw ( KiB/s): min=53200, max=53952, per=99.96%, avg=53545.00, stdev=314.93, samples=4 01:27:12.981 iops : min=13300, max=13488, avg=13386.25, stdev=78.73, samples=4 01:27:12.981 lat (msec) : 2=0.04%, 4=1.23%, 10=98.72% 01:27:12.981 cpu : usr=78.94%, sys=16.72%, ctx=7, majf=0, minf=7 01:27:12.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:27:12.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:27:12.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:27:12.982 issued rwts: total=26870,26851,0,0 short=0,0,0,0 dropped=0,0,0,0 01:27:12.982 latency : target=0, window=0, percentile=100.00%, depth=128 01:27:12.982 01:27:12.982 Run status group 0 (all jobs): 01:27:12.982 READ: bw=52.3MiB/s (54.9MB/s), 52.3MiB/s-52.3MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2005-2005msec 01:27:12.982 WRITE: bw=52.3MiB/s (54.9MB/s), 52.3MiB/s-52.3MiB/s (54.9MB/s-54.9MB/s), io=105MiB (110MB), run=2005-2005msec 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:27:12.982 05:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:27:12.982 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:27:12.982 fio-3.35 01:27:12.982 Starting 1 thread 01:27:15.505 01:27:15.505 test: (groupid=0, jobs=1): err= 0: pid=87270: Mon Dec 9 05:22:06 2024 01:27:15.505 read: IOPS=12.6k, BW=196MiB/s (206MB/s)(394MiB/2006msec) 01:27:15.505 slat (nsec): min=2333, max=79648, avg=2493.33, stdev=1251.13 01:27:15.505 clat (usec): min=1696, max=11709, avg=5919.70, stdev=1413.71 01:27:15.505 lat (usec): min=1699, max=11712, avg=5922.20, stdev=1413.78 01:27:15.505 clat percentiles (usec): 01:27:15.505 | 1.00th=[ 3064], 5.00th=[ 3687], 10.00th=[ 4080], 20.00th=[ 4621], 01:27:15.505 | 30.00th=[ 5080], 40.00th=[ 5473], 50.00th=[ 5932], 60.00th=[ 6325], 01:27:15.505 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7570], 95.00th=[ 8225], 01:27:15.505 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[10552], 99.95th=[11338], 01:27:15.505 | 99.99th=[11600] 01:27:15.505 bw ( KiB/s): min=98464, max=100608, per=49.39%, avg=99360.00, stdev=919.69, samples=4 01:27:15.505 iops : min= 6154, max= 6288, avg=6210.00, stdev=57.48, samples=4 01:27:15.505 write: IOPS=7312, BW=114MiB/s (120MB/s)(203MiB/1774msec); 0 zone resets 01:27:15.505 slat (usec): min=27, max=421, avg=28.16, stdev= 7.25 01:27:15.505 clat (usec): min=2235, max=13033, avg=7459.60, stdev=1314.94 01:27:15.505 lat (usec): min=2263, max=13060, avg=7487.75, stdev=1315.29 01:27:15.505 clat percentiles (usec): 01:27:15.505 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6325], 01:27:15.505 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7701], 01:27:15.505 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9765], 01:27:15.505 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12518], 99.95th=[12911], 01:27:15.505 | 99.99th=[13042] 01:27:15.505 bw ( KiB/s): min=102848, max=104352, per=88.40%, avg=103432.00, stdev=648.67, samples=4 01:27:15.505 iops : min= 6428, max= 6522, avg=6464.50, stdev=40.54, samples=4 01:27:15.505 lat (msec) : 2=0.06%, 4=5.86%, 10=92.65%, 20=1.42% 01:27:15.505 cpu : usr=81.95%, sys=12.67%, ctx=4, majf=0, minf=16 01:27:15.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:27:15.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:27:15.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:27:15.505 issued rwts: total=25221,12973,0,0 short=0,0,0,0 dropped=0,0,0,0 01:27:15.505 latency : target=0, window=0, percentile=100.00%, depth=128 01:27:15.505 01:27:15.505 Run status group 0 (all jobs): 01:27:15.505 READ: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=394MiB (413MB), run=2006-2006msec 01:27:15.505 WRITE: bw=114MiB/s 
(120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=203MiB (213MB), run=1774-1774msec 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:15.505 rmmod nvme_tcp 01:27:15.505 rmmod nvme_fabrics 01:27:15.505 rmmod nvme_keyring 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87097 ']' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87097 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87097 ']' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87097 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87097 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:15.505 killing process with pid 87097 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87097' 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87097 01:27:15.505 05:22:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87097 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:15.505 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 01:27:15.763 01:27:15.763 real 0m8.225s 01:27:15.763 user 0m33.928s 01:27:15.763 sys 0m1.601s 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:15.763 ************************************ 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:27:15.763 END TEST nvmf_fio_host 01:27:15.763 ************************************ 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:15.763 ************************************ 01:27:15.763 START TEST nvmf_failover 01:27:15.763 ************************************ 01:27:15.763 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:27:16.021 * Looking for test storage... 01:27:16.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:16.021 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:16.022 --rc genhtml_branch_coverage=1 01:27:16.022 --rc genhtml_function_coverage=1 01:27:16.022 --rc genhtml_legend=1 01:27:16.022 --rc geninfo_all_blocks=1 01:27:16.022 --rc geninfo_unexecuted_blocks=1 01:27:16.022 01:27:16.022 ' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:16.022 --rc genhtml_branch_coverage=1 01:27:16.022 --rc genhtml_function_coverage=1 01:27:16.022 --rc genhtml_legend=1 01:27:16.022 --rc geninfo_all_blocks=1 01:27:16.022 --rc geninfo_unexecuted_blocks=1 01:27:16.022 01:27:16.022 ' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:16.022 --rc genhtml_branch_coverage=1 01:27:16.022 --rc genhtml_function_coverage=1 01:27:16.022 --rc genhtml_legend=1 01:27:16.022 --rc geninfo_all_blocks=1 01:27:16.022 --rc geninfo_unexecuted_blocks=1 01:27:16.022 01:27:16.022 ' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:16.022 --rc genhtml_branch_coverage=1 01:27:16.022 --rc genhtml_function_coverage=1 01:27:16.022 --rc genhtml_legend=1 01:27:16.022 --rc geninfo_all_blocks=1 01:27:16.022 --rc geninfo_unexecuted_blocks=1 01:27:16.022 01:27:16.022 ' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:16.022 
05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:16.022 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
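The xtrace entries above show failover.sh working through the standard SPDK host-test preamble: it sources test/nvmf/common.sh, picks up the listener ports (4420/4421/4422) and a generated host NQN, sets its Malloc bdev geometry and RPC paths, and then calls nvmftestinit with NET_TYPE=virt. As a rough sketch only (paths and helper names are taken from the log; the condensed layout is assumed, not the script's literal contents):

  #!/usr/bin/env bash
  # Hedged sketch of the preamble/teardown pattern reflected in the trace above.
  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # defines NVMF_PORT=4420/4421/4422, nvmftestinit, nvmftestfini
  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  nvmftestinit            # NET_TYPE=virt: builds the veth/bridge topology, modprobes nvme-tcp,
                          # and installs: trap nvmftestfini SIGINT SIGTERM EXIT
  # ... test body (target config, bdevperf, listener add/remove) ...
  nvmftestfini            # rmmod nvme-tcp/nvme-fabrics, restore iptables, delete the veth/netns setup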
01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:16.022 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:16.023 Cannot find device "nvmf_init_br" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:16.023 Cannot find device "nvmf_init_br2" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
01:27:16.023 Cannot find device "nvmf_tgt_br" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:27:16.023 Cannot find device "nvmf_tgt_br2" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:16.023 Cannot find device "nvmf_init_br" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:16.023 Cannot find device "nvmf_init_br2" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:16.023 Cannot find device "nvmf_tgt_br" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:16.023 Cannot find device "nvmf_tgt_br2" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:16.023 Cannot find device "nvmf_br" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:27:16.023 Cannot find device "nvmf_init_if" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:16.023 Cannot find device "nvmf_init_if2" 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:16.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:16.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:16.023 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:16.281 
05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:16.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:27:16.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 01:27:16.281 01:27:16.281 --- 10.0.0.3 ping statistics --- 01:27:16.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:16.281 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:16.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:16.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 01:27:16.281 01:27:16.281 --- 10.0.0.4 ping statistics --- 01:27:16.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:16.281 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:16.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:27:16.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:27:16.281 01:27:16.281 --- 10.0.0.1 ping statistics --- 01:27:16.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:16.281 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:16.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:27:16.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 01:27:16.281 01:27:16.281 --- 10.0.0.2 ping statistics --- 01:27:16.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:16.281 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=87542 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 87542 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87542 ']' 01:27:16.281 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:27:16.281 05:22:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:16.281 [2024-12-09 05:22:07.822955] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:16.281 [2024-12-09 05:22:07.822997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:16.538 [2024-12-09 05:22:07.961979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:27:16.538 [2024-12-09 05:22:07.998276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:16.538 [2024-12-09 05:22:07.998308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:16.538 [2024-12-09 05:22:07.998314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:16.538 [2024-12-09 05:22:07.998319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:16.538 [2024-12-09 05:22:07.998324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
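The listeners the target is about to expose live inside a topology that nvmf_veth_init built a few entries earlier: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs for the initiator and target sides, a bridge nvmf_br joining them, 10.0.0.1/.2 on the host side and 10.0.0.3/.4 inside the namespace, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of that setup (interface names and addresses are from the log; the second initiator/target pair and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                            # host -> target namespace sanity check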
01:27:16.538 [2024-12-09 05:22:07.999354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:27:16.538 [2024-12-09 05:22:08.000119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:27:16.538 [2024-12-09 05:22:08.000249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:17.102 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:17.102 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:27:17.102 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:17.102 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:17.102 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:17.359 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:17.359 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:27:17.359 [2024-12-09 05:22:08.908898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:17.359 05:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:27:17.615 Malloc0 01:27:17.615 05:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:27:17.871 05:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:27:18.127 05:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:27:18.127 [2024-12-09 05:22:09.722344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:18.127 05:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:27:18.385 [2024-12-09 05:22:09.926464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:27:18.385 05:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:27:18.641 [2024-12-09 05:22:10.122625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87648 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87648 /var/tmp/bdevperf.sock 
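At this point the target is running inside the namespace (launched via ip netns exec with -m 0xE, reactors on cores 1-3) and has been configured over /var/tmp/spdk.sock: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 backed by that bdev, and listeners on 10.0.0.3 ports 4420, 4421 and 4422; bdevperf is then started against its own RPC socket. Collected in one place for readability (the individual commands appear in the trace above; only this grouping is added):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  # bdevperf is the initiator-side I/O generator; -z keeps it idle until perform_tests is invoked over its socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &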
01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87648 ']' 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:18.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:18.641 05:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:19.571 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:19.571 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:27:19.571 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:19.828 NVMe0n1 01:27:19.828 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:20.084 01:27:20.084 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87690 01:27:20.084 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:27:20.084 05:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:27:21.010 05:22:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:27:21.272 05:22:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:27:24.559 05:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:24.559 01:27:24.559 05:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:27:24.818 [2024-12-09 05:22:16.271851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d5e0 is same with the state(6) to be set 01:27:24.818 [2024-12-09 05:22:16.271885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d5e0 is same with the state(6) to be set 01:27:24.818 [2024-12-09 05:22:16.271890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d5e0 is same with the state(6) to be set 01:27:24.818 [2024-12-09 05:22:16.271895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d5e0 is same with the state(6) to be set 01:27:24.818 [2024-12-09 05:22:16.271898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x88d5e0 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state *ERROR* line for tqpair=0x88d5e0 is repeated many times with consecutive timestamps; repeats omitted ...]
01:27:24.819 05:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:27:28.149 05:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:27:28.149 [2024-12-09 05:22:19.478139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:28.149 05:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:27:29.081 05:22:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:27:29.081 [2024-12-09 05:22:20.693558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state *ERROR* line for tqpair=0x9d7630 is repeated many times with consecutive timestamps; repeats omitted ...]
01:27:29.082 [2024-12-09 05:22:20.693876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.082 [2024-12-09 05:22:20.693943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7630 is same with the state(6) to be set 01:27:29.340 05:22:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87690 01:27:35.900 { 01:27:35.900 "results": [ 01:27:35.900 { 01:27:35.900 "job": "NVMe0n1", 01:27:35.900 "core_mask": "0x1", 01:27:35.900 "workload": "verify", 01:27:35.900 "status": "finished", 01:27:35.900 "verify_range": { 01:27:35.900 "start": 0, 01:27:35.900 "length": 16384 01:27:35.900 }, 01:27:35.900 "queue_depth": 128, 01:27:35.900 "io_size": 4096, 01:27:35.900 "runtime": 15.005874, 01:27:35.900 "iops": 12339.834387520514, 01:27:35.900 "mibps": 48.20247807625201, 01:27:35.900 "io_failed": 5141, 01:27:35.900 
"io_timeout": 0, 01:27:35.900 "avg_latency_us": 10072.398852113725, 01:27:35.900 "min_latency_us": 381.2430769230769, 01:27:35.900 "max_latency_us": 19862.44923076923 01:27:35.900 } 01:27:35.900 ], 01:27:35.900 "core_count": 1 01:27:35.900 } 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87648 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87648 ']' 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87648 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87648 01:27:35.900 killing process with pid 87648 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87648' 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87648 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87648 01:27:35.900 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:27:35.900 [2024-12-09 05:22:10.171684] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:35.900 [2024-12-09 05:22:10.171752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87648 ] 01:27:35.900 [2024-12-09 05:22:10.310647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:35.900 [2024-12-09 05:22:10.345086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:35.900 Running I/O for 15 seconds... 
01:27:35.900 13231.00 IOPS, 51.68 MiB/s [2024-12-09T05:22:27.517Z] [2024-12-09 05:22:12.772872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.772930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.772946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.772961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.772976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.772991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.772997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.773005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.773020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.773027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.900 [2024-12-09 05:22:12.773035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.900 [2024-12-09 05:22:12.773041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.901 [2024-12-09 05:22:12.773056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.901 [2024-12-09 05:22:12.773071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 
[2024-12-09 05:22:12.773241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773392] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.901 [2024-12-09 05:22:12.773673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.901 [2024-12-09 05:22:12.773683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.773989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.773997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.774003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.774018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.774033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.902 [2024-12-09 05:22:12.774258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.902 [2024-12-09 05:22:12.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.902 [2024-12-09 05:22:12.774281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 
[2024-12-09 05:22:12.774468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.903 [2024-12-09 05:22:12.774513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.903 [2024-12-09 05:22:12.774866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.903 [2024-12-09 05:22:12.774873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4c620 is same with the state(6) to be set 01:27:35.903 [2024-12-09 05:22:12.774881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:27:35.903 [2024-12-09 05:22:12.774886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:27:35.904 [2024-12-09 05:22:12.774891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118952 len:8 PRP1 0x0 PRP2 0x0 01:27:35.904 [2024-12-09 05:22:12.774899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:12.774933] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:27:35.904 [2024-12-09 05:22:12.774966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.904 [2024-12-09 05:22:12.774975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:12.774982] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.904 [2024-12-09 05:22:12.774989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:12.774996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.904 [2024-12-09 05:22:12.775003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:12.775010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.904 [2024-12-09 05:22:12.775017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:12.775024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:27:35.904 [2024-12-09 05:22:12.777575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:27:35.904 [2024-12-09 05:22:12.777596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadff30 (9): Bad file descriptor 01:27:35.904 [2024-12-09 05:22:12.805558] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 01:27:35.904 12934.00 IOPS, 50.52 MiB/s [2024-12-09T05:22:27.521Z] 12946.00 IOPS, 50.57 MiB/s [2024-12-09T05:22:27.521Z] 12975.50 IOPS, 50.69 MiB/s [2024-12-09T05:22:27.521Z] [2024-12-09 05:22:16.272812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 
05:22:16.272940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.272984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.272992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.904 [2024-12-09 05:22:16.273271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.904 [2024-12-09 05:22:16.273279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98128 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.905 [2024-12-09 05:22:16.273513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:27:35.905 [2024-12-09 05:22:16.273558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.905 [2024-12-09 05:22:16.273859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.905 [2024-12-09 05:22:16.273867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.906 [2024-12-09 05:22:16.273904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.273986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.273993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 
05:22:16.274181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.906 [2024-12-09 05:22:16.274443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.906 [2024-12-09 05:22:16.274451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:75 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.907 [2024-12-09 05:22:16.274703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:16.274718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:16.274733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:16.274747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:16.274763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:16.274778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 
05:22:16.274793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:27:35.907 [2024-12-09 05:22:16.274819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:27:35.907 [2024-12-09 05:22:16.274824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 01:27:35.907 [2024-12-09 05:22:16.274831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274867] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 01:27:35.907 [2024-12-09 05:22:16.274898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:16.274906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:16.274920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:16.274934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:16.274948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:16.274954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:27:35.907 [2024-12-09 05:22:16.277505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:27:35.907 [2024-12-09 05:22:16.277531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadff30 (9): Bad file descriptor 01:27:35.907 [2024-12-09 05:22:16.308395] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
01:27:35.907 12854.20 IOPS, 50.21 MiB/s [2024-12-09T05:22:27.524Z] 12880.50 IOPS, 50.31 MiB/s [2024-12-09T05:22:27.524Z] 12887.43 IOPS, 50.34 MiB/s [2024-12-09T05:22:27.524Z] 12938.62 IOPS, 50.54 MiB/s [2024-12-09T05:22:27.524Z] 12944.78 IOPS, 50.57 MiB/s [2024-12-09T05:22:27.524Z] [2024-12-09 05:22:20.693961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:20.693990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.693998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:20.694005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.694012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:20.694019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.694026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:27:35.907 [2024-12-09 05:22:20.694032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.694039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadff30 is same with the state(6) to be set 01:27:35.907 [2024-12-09 05:22:20.694263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:20.694274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.694286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.907 [2024-12-09 05:22:20.694293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.907 [2024-12-09 05:22:20.694301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:27:35.908 [2024-12-09 05:22:20.694501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.908 [2024-12-09 05:22:20.694894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.908 [2024-12-09 05:22:20.694900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.694986] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.694993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.909 [2024-12-09 05:22:20.695130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.909 [2024-12-09 05:22:20.695372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.909 [2024-12-09 05:22:20.695381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695608] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.910 [2024-12-09 05:22:20.695686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 
[2024-12-09 05:22:20.695920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.910 [2024-12-09 05:22:20.695957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.910 [2024-12-09 05:22:20.695965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.695972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.695980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.695987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.695995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:35.911 [2024-12-09 05:22:20.696139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:35.911 [2024-12-09 05:22:20.696231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e8d0 is same with the state(6) to be set 01:27:35.911 [2024-12-09 05:22:20.696254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:27:35.911 [2024-12-09 05:22:20.696259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:27:35.911 [2024-12-09 05:22:20.696264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46336 len:8 PRP1 0x0 PRP2 0x0 01:27:35.911 [2024-12-09 05:22:20.696271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:35.911 [2024-12-09 05:22:20.696305] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 01:27:35.911 [2024-12-09 05:22:20.696314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:27:35.911 [2024-12-09 05:22:20.698852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:27:35.911 [2024-12-09 05:22:20.698874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadff30 (9): Bad file descriptor 01:27:35.911 [2024-12-09 05:22:20.725475] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 01:27:35.911 12928.30 IOPS, 50.50 MiB/s [2024-12-09T05:22:27.528Z] 12739.91 IOPS, 49.77 MiB/s [2024-12-09T05:22:27.528Z] 12777.83 IOPS, 49.91 MiB/s [2024-12-09T05:22:27.528Z] 12647.00 IOPS, 49.40 MiB/s [2024-12-09T05:22:27.528Z] 12459.57 IOPS, 48.67 MiB/s [2024-12-09T05:22:27.528Z] 12339.67 IOPS, 48.20 MiB/s 01:27:35.911 Latency(us) 01:27:35.911 [2024-12-09T05:22:27.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:35.911 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:27:35.911 Verification LBA range: start 0x0 length 0x4000 01:27:35.911 NVMe0n1 : 15.01 12339.83 48.20 342.60 0.00 10072.40 381.24 19862.45 01:27:35.911 [2024-12-09T05:22:27.528Z] =================================================================================================================== 01:27:35.911 [2024-12-09T05:22:27.528Z] Total : 12339.83 48.20 342.60 0.00 10072.40 381.24 19862.45 01:27:35.911 Received shutdown signal, test time was about 15.000000 seconds 01:27:35.911 01:27:35.911 Latency(us) 01:27:35.911 [2024-12-09T05:22:27.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:35.911 [2024-12-09T05:22:27.528Z] =================================================================================================================== 01:27:35.911 [2024-12-09T05:22:27.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 01:27:35.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
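The xtrace records just below show host/failover.sh verifying the first bdevperf run: it counts the 'Resetting controller successful' messages in the captured output and expects exactly three. A minimal bash sketch of that check, assuming the output has already been saved to a file ("$log" and the explicit failure exit are placeholders; only the grep -c and the count comparison are visible in the trace):

    # Count successful controller resets in the captured bdevperf log.
    count=$(grep -c 'Resetting controller successful' "$log")   # "$log" is a placeholder file name
    if (( count != 3 )); then
        # Assumed failure path: the trace only shows the comparison evaluating to false.
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi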
01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87900 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87900 /var/tmp/bdevperf.sock 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87900 ']' 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:35.911 05:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:36.169 05:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:36.169 05:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:27:36.169 05:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:27:36.425 [2024-12-09 05:22:27.938230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:27:36.425 05:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:27:36.729 [2024-12-09 05:22:28.134354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:27:36.729 05:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:37.014 NVMe0n1 01:27:37.014 05:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:37.271 01:27:37.271 05:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:27:37.529 01:27:37.529 05:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:27:37.529 05:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 01:27:37.529 05:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:27:37.787 05:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 01:27:41.066 05:22:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:27:41.066 05:22:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 01:27:41.067 05:22:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:27:41.067 05:22:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88039 01:27:41.067 05:22:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88039 01:27:42.442 { 01:27:42.442 "results": [ 01:27:42.442 { 01:27:42.442 "job": "NVMe0n1", 01:27:42.442 "core_mask": "0x1", 01:27:42.442 "workload": "verify", 01:27:42.442 "status": "finished", 01:27:42.442 "verify_range": { 01:27:42.442 "start": 0, 01:27:42.442 "length": 16384 01:27:42.442 }, 01:27:42.442 "queue_depth": 128, 01:27:42.442 "io_size": 4096, 01:27:42.442 "runtime": 1.005671, 01:27:42.442 "iops": 10442.77900028936, 01:27:42.442 "mibps": 40.79210546988031, 01:27:42.442 "io_failed": 0, 01:27:42.442 "io_timeout": 0, 01:27:42.442 "avg_latency_us": 12205.757782693407, 01:27:42.442 "min_latency_us": 1701.4153846153847, 01:27:42.442 "max_latency_us": 49000.763076923075 01:27:42.442 } 01:27:42.442 ], 01:27:42.442 "core_count": 1 01:27:42.442 } 01:27:42.442 05:22:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:27:42.442 [2024-12-09 05:22:26.892972] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
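The JSON block above is the output of the perform_tests call traced just before it, and the lines that follow replay the bdevperf log captured in try.txt. For reference, the sequence traced here follows the usual bdevperf-over-RPC pattern: launch bdevperf with -z so it idles until driven over its RPC socket, attach the target over TCP, then trigger the workload from bdevperf.py. A minimal sketch using the paths and flags shown in the trace (the sleep stands in for the script's waitforlisten helper):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf in RPC-driven mode (-z) on its own socket and leave it in the background.
    "$SPDK"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    sleep 2   # placeholder for: waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    # Attach the NVMe-oF target with failover enabled (-x failover), as in the trace.
    "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Kick off the configured verify workload and collect the JSON result shown above.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests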
01:27:42.442 [2024-12-09 05:22:26.893456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87900 ] 01:27:42.442 [2024-12-09 05:22:27.028994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:42.442 [2024-12-09 05:22:27.058444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:42.442 [2024-12-09 05:22:29.324471] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:27:42.442 [2024-12-09 05:22:29.324538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:27:42.442 [2024-12-09 05:22:29.324550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:42.442 [2024-12-09 05:22:29.324559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:27:42.442 [2024-12-09 05:22:29.324574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:42.442 [2024-12-09 05:22:29.324581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:27:42.442 [2024-12-09 05:22:29.324588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:42.442 [2024-12-09 05:22:29.324595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:27:42.442 [2024-12-09 05:22:29.324602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:42.442 [2024-12-09 05:22:29.324608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 01:27:42.442 [2024-12-09 05:22:29.324634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 01:27:42.442 [2024-12-09 05:22:29.324649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0f30 (9): Bad file descriptor 01:27:42.442 [2024-12-09 05:22:29.328147] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 01:27:42.442 Running I/O for 1 seconds... 
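As a quick consistency check, the per-run JSON above and the latency table that follows describe the same one-second verify run in different units: 10442.78 IOPS of 4096-byte I/Os is 10442.78 * 4096 / 2^20 ~ 40.79 MiB/s, matching the reported throughput, and with a queue depth of 128 Little's law predicts an average latency of roughly 128 / 10442.78 s ~ 12.3 ms, in line with the ~12205.76 us average shown.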
01:27:42.442 10361.00 IOPS, 40.47 MiB/s 01:27:42.442 Latency(us) 01:27:42.442 [2024-12-09T05:22:34.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:42.442 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:27:42.442 Verification LBA range: start 0x0 length 0x4000 01:27:42.442 NVMe0n1 : 1.01 10442.78 40.79 0.00 0.00 12205.76 1701.42 49000.76 01:27:42.442 [2024-12-09T05:22:34.059Z] =================================================================================================================== 01:27:42.442 [2024-12-09T05:22:34.059Z] Total : 10442.78 40.79 0.00 0.00 12205.76 1701.42 49000.76 01:27:42.442 05:22:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:27:42.442 05:22:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 01:27:42.442 05:22:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:27:42.700 05:22:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:27:42.700 05:22:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 01:27:42.700 05:22:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:27:42.958 05:22:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87900 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87900 ']' 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87900 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87900 01:27:46.233 killing process with pid 87900 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87900' 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87900 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87900 01:27:46.233 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 01:27:46.489 05:22:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:46.489 rmmod nvme_tcp 01:27:46.489 rmmod nvme_fabrics 01:27:46.489 rmmod nvme_keyring 01:27:46.489 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 87542 ']' 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 87542 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87542 ']' 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87542 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87542 01:27:46.490 killing process with pid 87542 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87542' 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87542 01:27:46.490 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87542 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:46.746 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 01:27:47.003 ************************************ 01:27:47.003 END TEST nvmf_failover 01:27:47.003 ************************************ 01:27:47.003 01:27:47.003 real 0m31.113s 01:27:47.003 user 2m1.462s 01:27:47.003 sys 0m3.098s 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:47.003 05:22:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:47.003 ************************************ 01:27:47.003 START TEST nvmf_host_discovery 01:27:47.003 ************************************ 01:27:47.004 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:27:47.004 * Looking for test storage... 
01:27:47.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:47.004 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:47.004 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:47.004 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:47.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:47.262 --rc genhtml_branch_coverage=1 01:27:47.262 --rc genhtml_function_coverage=1 01:27:47.262 --rc genhtml_legend=1 01:27:47.262 --rc geninfo_all_blocks=1 01:27:47.262 --rc geninfo_unexecuted_blocks=1 01:27:47.262 01:27:47.262 ' 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:47.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:47.262 --rc genhtml_branch_coverage=1 01:27:47.262 --rc genhtml_function_coverage=1 01:27:47.262 --rc genhtml_legend=1 01:27:47.262 --rc geninfo_all_blocks=1 01:27:47.262 --rc geninfo_unexecuted_blocks=1 01:27:47.262 01:27:47.262 ' 01:27:47.262 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:47.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:47.262 --rc genhtml_branch_coverage=1 01:27:47.262 --rc genhtml_function_coverage=1 01:27:47.262 --rc genhtml_legend=1 01:27:47.262 --rc geninfo_all_blocks=1 01:27:47.263 --rc geninfo_unexecuted_blocks=1 01:27:47.263 01:27:47.263 ' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:47.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:47.263 --rc genhtml_branch_coverage=1 01:27:47.263 --rc genhtml_function_coverage=1 01:27:47.263 --rc genhtml_legend=1 01:27:47.263 --rc geninfo_all_blocks=1 01:27:47.263 --rc geninfo_unexecuted_blocks=1 01:27:47.263 01:27:47.263 ' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:47.263 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
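NVMF_TARGET_NS_CMD, defined just above, is an array that prefixes target-side commands so they execute inside the nvmf_tgt_ns_spdk namespace, while initiator-side commands stay in the root namespace. A hedged illustration of how that prefix array is expanded (the variable names and addresses match the trace; this is a condensed example, not the script itself):

  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

  # Expanding the array in front of a command runs it inside the target namespace,
  # e.g. the target-side address assignment and the target launch seen later in the log:
  "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2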
01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:47.263 Cannot find device "nvmf_init_br" 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:47.263 Cannot find device "nvmf_init_br2" 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:27:47.263 Cannot find device "nvmf_tgt_br" 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:27:47.263 Cannot find device "nvmf_tgt_br2" 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 01:27:47.263 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:47.263 Cannot find device "nvmf_init_br" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:47.264 Cannot find device "nvmf_init_br2" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:47.264 Cannot find device "nvmf_tgt_br" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:47.264 Cannot find device "nvmf_tgt_br2" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:47.264 Cannot find device "nvmf_br" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:27:47.264 Cannot find device "nvmf_init_if" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:47.264 Cannot find device "nvmf_init_if2" 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:47.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:47.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:47.264 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:47.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:27:47.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:27:47.521 01:27:47.521 --- 10.0.0.3 ping statistics --- 01:27:47.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:47.521 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:47.521 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:47.521 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 01:27:47.521 01:27:47.521 --- 10.0.0.4 ping statistics --- 01:27:47.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:47.521 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:47.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:27:47.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:27:47.521 01:27:47.521 --- 10.0.0.1 ping statistics --- 01:27:47.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:47.521 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:47.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:27:47.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 01:27:47.521 01:27:47.521 --- 10.0.0.2 ping statistics --- 01:27:47.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:47.521 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:27:47.521 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88390 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88390 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88390 ']' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:47.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:47.522 05:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:47.522 [2024-12-09 05:22:39.032169] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
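Each firewall rule above is issued through the ipts wrapper, which re-runs iptables with a comment beginning with SPDK_NVMF; that tag is what lets the teardown earlier in the log strip only SPDK's rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. A hedged reconstruction of such a wrapper, inferred from the expanded commands in the trace (the actual function body in nvmf/common.sh may differ):

  # Assumed reconstruction of the ipts helper
  ipts() {
      # Re-issue the requested rule with a tag so teardown can filter it out later
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT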
01:27:47.522 [2024-12-09 05:22:39.032228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:47.780 [2024-12-09 05:22:39.169288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:47.780 [2024-12-09 05:22:39.204175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:47.780 [2024-12-09 05:22:39.204211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:47.780 [2024-12-09 05:22:39.204220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:47.780 [2024-12-09 05:22:39.204227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:47.780 [2024-12-09 05:22:39.204232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:27:47.780 [2024-12-09 05:22:39.204554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.345 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.345 [2024-12-09 05:22:39.958258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.603 [2024-12-09 05:22:39.966339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.603 null0 01:27:48.603 05:22:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.603 null1 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88440 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88440 /tmp/host.sock 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88440 ']' 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:27:48.603 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:48.603 05:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:27:48.603 [2024-12-09 05:22:40.033548] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
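The test now runs two SPDK applications: the target (pid 88390) inside the network namespace, controlled over the default /var/tmp/spdk.sock RPC socket, and a second instance (pid 88440) in the root namespace on /tmp/host.sock that plays the host role through the bdev_nvme discovery service. A condensed sketch of that arrangement, using the flags and paths from the trace:

  # Target side: runs inside the namespace, serves discovery on 8009 and I/O on 4420/4421
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Host side: a second SPDK application in the root namespace with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!

  # Target-side RPCs use the default socket; host-side RPCs pass -s /tmp/host.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers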
01:27:48.603 [2024-12-09 05:22:40.033625] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88440 ] 01:27:48.603 [2024-12-09 05:22:40.170984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:48.603 [2024-12-09 05:22:40.206666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
# sort 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.536 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 [2024-12-09 05:22:41.098546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:27:49.537 05:22:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:49.537 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 01:27:49.795 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:27:50.359 [2024-12-09 05:22:41.880723] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:27:50.359 [2024-12-09 05:22:41.880835] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:27:50.359 [2024-12-09 05:22:41.880871] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:27:50.359 
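While the discovery controller attaches in the background, the host-side script polls with the waitforcondition helper traced above (autotest_common.sh lines 918-924): it repeatedly evaluates a condition string, sleeping one second between attempts, for up to ten tries. A hedged reconstruction of that helper; the exact body may differ slightly from the real script:

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0      # condition met, e.g. the discovered controller has appeared
          fi
          sleep 1           # the "sleep 1" entries in the log are these retries
      done
      return 1
  }

  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'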
[2024-12-09 05:22:41.966842] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:27:50.615 [2024-12-09 05:22:42.021211] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:27:50.615 [2024-12-09 05:22:42.021878] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1471ba0:1 started. 01:27:50.615 [2024-12-09 05:22:42.023476] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:27:50.615 [2024-12-09 05:22:42.023606] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:27:50.615 [2024-12-09 05:22:42.029470] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1471ba0 was disconnected and freed. delete nvme_qpair. 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.873 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:50.874 05:22:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:50.874 05:22:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:27:50.874 [2024-12-09 05:22:42.382138] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1471fb0:1 started. 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:50.874 [2024-12-09 05:22:42.389576] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1471fb0 was disconnected and freed. delete nvme_qpair. 
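For readers following the trace, the polling machinery exercised above can be reconstructed roughly as follows. This is a sketch inferred from the xtrace tags (autotest_common.sh@918-@924, discovery.sh@74-@75, @79-@80), not the actual test source:

waitforcondition() {
    # Poll a shell condition, retrying up to 10 times before giving up.
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met
        sleep 1                    # the per-retry "sleep 1" visible at @924
    done
    return 1
}

get_notification_count() {
    # Count bdev notifications newer than the last seen notify_id; the
    # notify_id arithmetic is inferred from the 1 -> 2 -> 4 progression in this log.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}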
01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 [2024-12-09 05:22:42.442877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:27:50.874 [2024-12-09 05:22:42.443676] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:27:50.874 [2024-12-09 05:22:42.443696] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:50.874 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:50.875 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:50.875 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:50.875 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:51.131 05:22:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:51.131 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:51.132 [2024-12-09 05:22:42.530298] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 01:27:51.132 05:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:27:51.132 [2024-12-09 05:22:42.594698] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:27:51.132 [2024-12-09 05:22:42.594734] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:27:51.132 [2024-12-09 05:22:42.594741] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:27:51.132 [2024-12-09 05:22:42.594745] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
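The list helpers polled throughout this test appear in the trace as rpc_cmd/jq/sort/xargs pipelines against the host socket; a rough reconstruction (a sketch inferred from the trace, not the actual host/discovery.sh):

get_subsystem_names() {
    # controller names attached on the host side, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # TCP service IDs (ports) of every path to the given controller
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

After the 4421 listener was added at @118 and the discovery AER plus log page delivered the new path, the @122 wait above expects get_subsystem_paths nvme0 to settle on "4420 4421" ($NVMF_PORT $NVMF_SECOND_PORT).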
01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:27:52.120 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.121 [2024-12-09 05:22:43.620135] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:27:52.121 [2024-12-09 05:22:43.620159] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:27:52.121 [2024-12-09 05:22:43.621366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:27:52.121 [2024-12-09 05:22:43.621389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:52.121 [2024-12-09 05:22:43.621395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:27:52.121 [2024-12-09 05:22:43.621400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:52.121 [2024-12-09 05:22:43.621406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:27:52.121 [2024-12-09 05:22:43.621411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:52.121 [2024-12-09 05:22:43.621416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:27:52.121 [2024-12-09 05:22:43.621420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:52.121 [2024-12-09 05:22:43.621425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:52.121 [2024-12-09 05:22:43.631344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.121 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.121 [2024-12-09 05:22:43.641353] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.121 [2024-12-09 05:22:43.641366] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.121 [2024-12-09 05:22:43.641369] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.121 [2024-12-09 05:22:43.641372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.121 [2024-12-09 05:22:43.641388] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:27:52.121 [2024-12-09 05:22:43.641436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.121 [2024-12-09 05:22:43.641445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.121 [2024-12-09 05:22:43.641451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.121 [2024-12-09 05:22:43.641458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.121 [2024-12-09 05:22:43.641466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.121 [2024-12-09 05:22:43.641470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.121 [2024-12-09 05:22:43.641475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.121 [2024-12-09 05:22:43.641480] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.121 [2024-12-09 05:22:43.641483] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
01:27:52.121 [2024-12-09 05:22:43.641485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.121 [2024-12-09 05:22:43.651394] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.121 [2024-12-09 05:22:43.651407] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.121 [2024-12-09 05:22:43.651409] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.121 [2024-12-09 05:22:43.651412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.121 [2024-12-09 05:22:43.651424] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:27:52.121 [2024-12-09 05:22:43.651449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.121 [2024-12-09 05:22:43.651457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.121 [2024-12-09 05:22:43.651462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.121 [2024-12-09 05:22:43.651469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.121 [2024-12-09 05:22:43.651475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.121 [2024-12-09 05:22:43.651479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.122 [2024-12-09 05:22:43.651483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.122 [2024-12-09 05:22:43.651487] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.122 [2024-12-09 05:22:43.651489] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.122 [2024-12-09 05:22:43.651491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
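The burst of reconnect errors here is the expected fallout of the listener removal issued at @127 earlier in the trace: errno 111 is ECONNREFUSED, so the host-side bdev_nvme resets keep failing against 10.0.0.3:4420 until the next discovery log page reports that path gone. A sketch of the sequence the test is driving (inferred from the trace; $NVMF_SECOND_PORT is 4421 here):

# target side: drop the 4420 listener, keep 4421 (as issued at @127)
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# host side: wait until only the 4421 path remains on nvme0 (the @131 check)
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'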
01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:52.122 [2024-12-09 05:22:43.661431] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.122 [2024-12-09 05:22:43.661444] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.122 [2024-12-09 05:22:43.661446] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.122 [2024-12-09 05:22:43.661449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.122 [2024-12-09 05:22:43.661461] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:27:52.122 [2024-12-09 05:22:43.661485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.122 [2024-12-09 05:22:43.661493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.122 [2024-12-09 05:22:43.661497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.122 [2024-12-09 05:22:43.661504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.122 [2024-12-09 05:22:43.661511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.122 [2024-12-09 05:22:43.661515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.122 [2024-12-09 05:22:43.661519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
01:27:52.122 [2024-12-09 05:22:43.661522] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.122 [2024-12-09 05:22:43.661525] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.122 [2024-12-09 05:22:43.661527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.122 [2024-12-09 05:22:43.671467] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.122 [2024-12-09 05:22:43.671481] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.122 [2024-12-09 05:22:43.671483] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.122 [2024-12-09 05:22:43.671486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.122 [2024-12-09 05:22:43.671498] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:27:52.122 [2024-12-09 05:22:43.671524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.122 [2024-12-09 05:22:43.671531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.122 [2024-12-09 05:22:43.671536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.122 [2024-12-09 05:22:43.671543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.122 [2024-12-09 05:22:43.671550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.122 [2024-12-09 05:22:43.671554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.122 [2024-12-09 05:22:43.671559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.122 [2024-12-09 05:22:43.671571] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.122 [2024-12-09 05:22:43.671573] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.122 [2024-12-09 05:22:43.671575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.122 [2024-12-09 05:22:43.681504] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.122 [2024-12-09 05:22:43.681517] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.122 [2024-12-09 05:22:43.681520] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.122 [2024-12-09 05:22:43.681522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.122 [2024-12-09 05:22:43.681534] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:27:52.122 [2024-12-09 05:22:43.681556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.122 [2024-12-09 05:22:43.681569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.122 [2024-12-09 05:22:43.681573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.122 [2024-12-09 05:22:43.681580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.122 [2024-12-09 05:22:43.681587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.122 [2024-12-09 05:22:43.681590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.122 [2024-12-09 05:22:43.681595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.122 [2024-12-09 05:22:43.681598] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.122 [2024-12-09 05:22:43.681601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.122 [2024-12-09 05:22:43.681603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:27:52.122 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:27:52.123 [2024-12-09 05:22:43.691539] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.123 [2024-12-09 05:22:43.691552] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.123 [2024-12-09 05:22:43.691554] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.123 [2024-12-09 05:22:43.691557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.123 [2024-12-09 05:22:43.691573] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:27:52.123 [2024-12-09 05:22:43.691593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:27:52.123 [2024-12-09 05:22:43.691600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.123 [2024-12-09 05:22:43.691605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.123 [2024-12-09 05:22:43.691611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.123 [2024-12-09 05:22:43.691618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.123 [2024-12-09 05:22:43.691621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.123 [2024-12-09 05:22:43.691625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.123 [2024-12-09 05:22:43.691629] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.123 [2024-12-09 05:22:43.691631] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.123 [2024-12-09 05:22:43.691633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:27:52.123 [2024-12-09 05:22:43.701578] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:27:52.123 [2024-12-09 05:22:43.701589] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:27:52.123 [2024-12-09 05:22:43.701591] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:27:52.123 [2024-12-09 05:22:43.701594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:27:52.123 [2024-12-09 05:22:43.701607] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:27:52.123 [2024-12-09 05:22:43.701630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:52.123 [2024-12-09 05:22:43.701637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1444290 with addr=10.0.0.3, port=4420 01:27:52.123 [2024-12-09 05:22:43.701641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1444290 is same with the state(6) to be set 01:27:52.123 [2024-12-09 05:22:43.701648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444290 (9): Bad file descriptor 01:27:52.123 [2024-12-09 05:22:43.701654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:27:52.123 [2024-12-09 05:22:43.701658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:27:52.123 [2024-12-09 05:22:43.701662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:27:52.123 [2024-12-09 05:22:43.701665] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:27:52.123 [2024-12-09 05:22:43.701668] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:27:52.123 [2024-12-09 05:22:43.701670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:52.123 [2024-12-09 05:22:43.706178] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:27:52.123 [2024-12-09 05:22:43.706194] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 01:27:52.123 05:22:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:27:53.497 
05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:53.497 05:22:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.428 [2024-12-09 05:22:45.911018] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:27:54.428 [2024-12-09 05:22:45.911036] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:27:54.428 [2024-12-09 05:22:45.911045] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:27:54.428 [2024-12-09 05:22:45.997102] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 01:27:54.687 [2024-12-09 05:22:46.055307] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:27:54.687 [2024-12-09 05:22:46.055663] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x13ecdc0:1 started. 
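At @141 above, the test re-arms discovery with -w (wait_for_attach), so the RPC does not return until the discovery controller has attached and the 4421 subsystem has been re-created, which is what these entries show. The @143 and @149 checks that follow assert that starting another discovery against the same endpoint is rejected. A sketch of that negative check, inferred from the trace (NOT is the autotest wrapper, traced just below via valid_exec_arg and es=1, that succeeds only when its command fails):

# first discovery service on 10.0.0.3:8009 (already running after @141)
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# a second start against the same endpoint must fail with Code=-17 (File exists),
# whether the -b name is reused ("nvme") or new ("nvme_second")
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w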
01:27:54.687 [2024-12-09 05:22:46.057095] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:27:54.687 [2024-12-09 05:22:46.057123] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:54.687 [2024-12-09 05:22:46.060173] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x13ecdc0 was disconnected and freed. delete nvme_qpair. 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.687 2024/12/09 05:22:46 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:27:54.687 request: 01:27:54.687 { 01:27:54.687 "method": "bdev_nvme_start_discovery", 01:27:54.687 "params": { 01:27:54.687 "name": "nvme", 01:27:54.687 "trtype": "tcp", 01:27:54.687 "traddr": "10.0.0.3", 01:27:54.687 "adrfam": "ipv4", 01:27:54.687 "trsvcid": "8009", 01:27:54.687 "hostnqn": "nqn.2021-12.io.spdk:test", 01:27:54.687 "wait_for_attach": true 01:27:54.687 } 01:27:54.687 } 01:27:54.687 Got JSON-RPC error response 01:27:54.687 GoRPCClient: error on JSON-RPC call 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # get_discovery_ctrlrs 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.687 2024/12/09 05:22:46 error on JSON-RPC call, method: 
bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:27:54.687 request: 01:27:54.687 { 01:27:54.687 "method": "bdev_nvme_start_discovery", 01:27:54.687 "params": { 01:27:54.687 "name": "nvme_second", 01:27:54.687 "trtype": "tcp", 01:27:54.687 "traddr": "10.0.0.3", 01:27:54.687 "adrfam": "ipv4", 01:27:54.687 "trsvcid": "8009", 01:27:54.687 "hostnqn": "nqn.2021-12.io.spdk:test", 01:27:54.687 "wait_for_attach": true 01:27:54.687 } 01:27:54.687 } 01:27:54.687 Got JSON-RPC error response 01:27:54.687 GoRPCClient: error on JSON-RPC call 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:27:54.687 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
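Both "File exists" responses above are what host/discovery.sh is asserting: once a discovery service is registered for 10.0.0.3:8009, registering the same discovery target again is rejected with Code=-17, whether the original bdev name prefix (nvme) or a new one (nvme_second) is used. A minimal sketch of that check driven with rpc.py directly, assuming a freshly started host application exposing its RPC socket at /tmp/host.sock as in this run (flags copied from the trace above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/tmp/host.sock
    # First registration is expected to succeed; -w blocks until the discovery
    # controller has attached.
    $RPC -s $SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Re-registering the same traddr/trsvcid must fail with Code=-17 (File exists),
    # even under a different bdev name prefix.
    if $RPC -s $SOCK bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate discovery registration unexpectedly succeeded" >&2
        exit 1
    fi

The attempt against port 8010 that starts below exercises the other failure path: with -T 3000 and no listener on that port, the call returns Code=-110 (Connection timed out) instead.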
01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:54.688 05:22:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:55.623 [2024-12-09 05:22:47.222197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:55.623 [2024-12-09 05:22:47.222230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147db50 with addr=10.0.0.3, port=8010 01:27:55.623 [2024-12-09 05:22:47.222240] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:27:55.623 [2024-12-09 05:22:47.222246] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:27:55.623 [2024-12-09 05:22:47.222250] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:27:56.995 [2024-12-09 05:22:48.222197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:27:56.995 [2024-12-09 05:22:48.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a9e50 with addr=10.0.0.3, port=8010 01:27:56.995 [2024-12-09 05:22:48.222238] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:27:56.995 [2024-12-09 05:22:48.222243] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:27:56.995 [2024-12-09 05:22:48.222248] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:27:57.611 [2024-12-09 05:22:49.222146] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 01:27:57.611 2024/12/09 05:22:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 01:27:57.611 request: 01:27:57.611 { 01:27:57.611 "method": "bdev_nvme_start_discovery", 01:27:57.611 "params": { 01:27:57.611 "name": "nvme_second", 01:27:57.611 "trtype": "tcp", 01:27:57.611 "traddr": "10.0.0.3", 01:27:57.611 "adrfam": "ipv4", 01:27:57.611 "trsvcid": "8010", 01:27:57.611 "hostnqn": "nqn.2021-12.io.spdk:test", 01:27:57.611 "wait_for_attach": false, 01:27:57.611 "attach_timeout_ms": 3000 01:27:57.611 } 
01:27:57.611 } 01:27:57.611 Got JSON-RPC error response 01:27:57.611 GoRPCClient: error on JSON-RPC call 01:27:57.611 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:27:57.611 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:27:57.611 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:27:57.611 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:27:57.611 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88440 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:57.869 rmmod nvme_tcp 01:27:57.869 rmmod nvme_fabrics 01:27:57.869 rmmod nvme_keyring 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88390 ']' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88390 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88390 ']' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88390 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 01:27:57.869 
05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88390 01:27:57.869 killing process with pid 88390 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88390' 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88390 01:27:57.869 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 88390 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # 
remove_spdk_ns 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 01:27:58.127 01:27:58.127 real 0m11.203s 01:27:58.127 user 0m21.125s 01:27:58.127 sys 0m1.307s 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:58.127 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:27:58.127 ************************************ 01:27:58.127 END TEST nvmf_host_discovery 01:27:58.127 ************************************ 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:58.386 ************************************ 01:27:58.386 START TEST nvmf_host_multipath_status 01:27:58.386 ************************************ 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:27:58.386 * Looking for test storage... 
01:27:58.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:58.386 --rc genhtml_branch_coverage=1 01:27:58.386 --rc genhtml_function_coverage=1 01:27:58.386 --rc genhtml_legend=1 01:27:58.386 --rc geninfo_all_blocks=1 01:27:58.386 --rc geninfo_unexecuted_blocks=1 01:27:58.386 01:27:58.386 ' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:58.386 --rc genhtml_branch_coverage=1 01:27:58.386 --rc genhtml_function_coverage=1 01:27:58.386 --rc genhtml_legend=1 01:27:58.386 --rc geninfo_all_blocks=1 01:27:58.386 --rc geninfo_unexecuted_blocks=1 01:27:58.386 01:27:58.386 ' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:58.386 --rc genhtml_branch_coverage=1 01:27:58.386 --rc genhtml_function_coverage=1 01:27:58.386 --rc genhtml_legend=1 01:27:58.386 --rc geninfo_all_blocks=1 01:27:58.386 --rc geninfo_unexecuted_blocks=1 01:27:58.386 01:27:58.386 ' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:58.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:58.386 --rc genhtml_branch_coverage=1 01:27:58.386 --rc genhtml_function_coverage=1 01:27:58.386 --rc genhtml_legend=1 01:27:58.386 --rc geninfo_all_blocks=1 01:27:58.386 --rc geninfo_unexecuted_blocks=1 01:27:58.386 01:27:58.386 ' 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:58.386 05:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:58.386 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:58.387 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:58.387 Cannot find device "nvmf_init_br" 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:58.387 Cannot find device "nvmf_init_br2" 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:27:58.387 Cannot find device "nvmf_tgt_br" 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 01:27:58.387 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:27:58.388 Cannot find device "nvmf_tgt_br2" 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:58.388 Cannot find device "nvmf_init_br" 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:58.388 Cannot find device "nvmf_init_br2" 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 01:27:58.388 05:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:58.646 Cannot find device "nvmf_tgt_br" 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:58.646 Cannot find device "nvmf_tgt_br2" 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:58.646 Cannot find device "nvmf_br" 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 01:27:58.646 Cannot find device "nvmf_init_if" 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:58.646 Cannot find device "nvmf_init_if2" 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:58.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:58.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:58.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:27:58.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 01:27:58.646 01:27:58.646 --- 10.0.0.3 ping statistics --- 01:27:58.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:58.646 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:58.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:58.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 01:27:58.646 01:27:58.646 --- 10.0.0.4 ping statistics --- 01:27:58.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:58.646 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:58.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:27:58.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:27:58.646 01:27:58.646 --- 10.0.0.1 ping statistics --- 01:27:58.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:58.646 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:58.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:27:58.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 01:27:58.646 01:27:58.646 --- 10.0.0.2 ping statistics --- 01:27:58.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:58.646 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:27:58.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=88982 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 88982 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 88982 ']' 01:27:58.646 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:58.647 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:27:58.647 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:58.647 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
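The "Cannot find device" and "Cannot open network namespace" messages above come from the teardown pass that nvmf_veth_init runs before rebuilding the topology; the ip/iptables calls after those messages recreate it, and the four pings confirm reachability in both directions. Condensed from that trace, a sketch of the single initiator/target pair this suite actually uses (the real helper also wires up nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4):

    # Target side lives in its own network namespace; host side keeps the initiator end.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the two veth peers so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                                   # host -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> host

The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -m 0x3, as traced above), which is why every listener added below binds to 10.0.0.3.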
01:27:58.647 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:58.647 05:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:27:58.903 [2024-12-09 05:22:50.287476] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:27:58.903 [2024-12-09 05:22:50.287650] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:58.903 [2024-12-09 05:22:50.420252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:27:58.903 [2024-12-09 05:22:50.449990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:58.903 [2024-12-09 05:22:50.450019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:58.903 [2024-12-09 05:22:50.450024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:58.903 [2024-12-09 05:22:50.450028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:58.903 [2024-12-09 05:22:50.450031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:27:58.903 [2024-12-09 05:22:50.450603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:58.903 [2024-12-09 05:22:50.450658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:59.534 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:59.534 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:27:59.534 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:59.534 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:59.534 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:27:59.791 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:59.791 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88982 01:27:59.791 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:27:59.791 [2024-12-09 05:22:51.336555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:59.792 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:28:00.048 Malloc0 01:28:00.048 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:28:00.305 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:28:00.562 05:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:28:00.562 [2024-12-09 05:22:52.160074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:28:00.819 [2024-12-09 05:22:52.364134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89080 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89080 /var/tmp/bdevperf.sock 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89080 ']' 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:00.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
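Between the two "Target Listening" notices above, host/multipath_status.sh provisions everything the target needs over the default RPC socket. A condensed sketch of those calls, assuming the nvmf_tgt running inside the namespace is the process answering /var/tmp/spdk.sock; all flag values are copied verbatim from the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 in the script header):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # the script's $rpc_py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Two listeners on the same target address give the host two paths to one
    # subsystem; their ANA states are what the rest of the suite toggles.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf is then launched separately (-m 0x4 -z -r /var/tmp/bdevperf.sock, as traced above) so the host-side controllers can be driven over their own RPC socket.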
01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:00.819 05:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:28:01.748 05:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:01.748 05:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:28:01.748 05:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:28:02.006 05:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:28:02.264 Nvme0n1 01:28:02.264 05:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:28:02.521 Nvme0n1 01:28:02.779 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:28:02.779 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:28:04.676 05:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:28:04.676 05:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:28:04.933 05:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:05.191 05:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:28:06.123 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:28:06.123 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:06.123 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:06.123 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:06.381 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:06.381 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:28:06.381 05:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:06.381 05:22:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:06.639 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:06.896 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:06.896 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:06.896 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:06.896 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:07.154 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:07.154 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:07.154 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:07.154 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:07.411 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:07.411 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:28:07.411 05:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:07.668 05:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
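The check_status calls above and below are six variants of one query: the same subsystem is attached through both listeners under a single controller name, and bdev_nvme_get_io_paths on bdevperf's socket reports whether each path is current, connected, and accessible. A sketch of that pattern, with the jq filters copied verbatim from the trace and assuming bdevperf is still listening on /var/tmp/bdevperf.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # One controller name, two paths (-x multipath), as in host/multipath_status.sh@55/@56.
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # port_status boils down to: is the 4420 path the one I/O currently goes through?
    $rpc -s $sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

Swapping "current" for "connected" or "accessible" yields the other columns that check_status compares against its six expected true/false arguments after each set_ANA_state call.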
01:28:07.668 05:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.037 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.293 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:09.550 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:09.550 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:09.550 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.550 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:09.806 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:09.806 05:23:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:09.806 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:09.806 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:10.063 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:10.063 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:28:10.063 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:10.320 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:28:10.320 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:28:11.685 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:28:11.685 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:11.685 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:11.685 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:11.685 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:11.685 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:28:11.685 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:11.685 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:11.942 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:12.199 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:12.199 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:12.199 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:12.199 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:12.456 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:12.457 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:12.457 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:12.457 05:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:12.713 05:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:12.713 05:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:28:12.713 05:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:12.975 05:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:28:12.975 05:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:14.345 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:14.603 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:14.603 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:14.603 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:14.603 05:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:14.603 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:14.603 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:14.603 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:14.603 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:14.866 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:14.866 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:14.866 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:14.866 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:15.124 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:15.124 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:28:15.124 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:15.124 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:15.382 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:15.383 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:28:15.383 05:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:28:15.640 05:23:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:28:15.640 05:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:17.014 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:17.272 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:17.528 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:17.528 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:28:17.528 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:17.528 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:17.785 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:17.785 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:28:17.785 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:17.785 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:18.042 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:18.042 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:28:18.042 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:28:18.042 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:18.299 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:28:19.667 05:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:28:19.667 05:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:28:19.667 05:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:19.667 05:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:19.667 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:19.667 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:28:19.667 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:19.667 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:19.924 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:20.180 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:20.180 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:28:20.180 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:20.180 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:20.437 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:20.437 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:20.437 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:20.437 05:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:20.692 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:20.692 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:28:20.949 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:28:20.949 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:28:20.949 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:21.205 05:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:28:22.136 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:28:22.136 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:22.136 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:22.136 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:22.393 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:22.393 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:28:22.394 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:22.394 05:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:22.651 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:22.651 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:22.651 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:22.651 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:22.909 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:22.909 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:22.909 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:22.909 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:23.167 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:23.424 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:23.424 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:28:23.424 05:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:23.684 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:23.941 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:28:24.876 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:28:24.876 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:28:24.876 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:24.876 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:25.198 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:25.198 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:28:25.198 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:25.198 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:25.462 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:25.462 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:25.462 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:25.462 05:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:25.462 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:25.462 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:25.462 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:25.462 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:25.720 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
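(Up to the bdev_nvme_set_multipath_policy call a few entries back, the controller used the default active_passive policy, so only one path reported current=true at a time. With active_active, all paths in the preferred ANA group are current at once, which is why check_status now expects current=true on both 4420 and 4421. An illustrative way to confirm that outside the helper, using the same bdevperf RPC socket as above:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'   # 2 with both listeners optimized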
01:28:25.720 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:25.720 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:25.720 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:25.978 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:25.978 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:25.978 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:25.978 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:26.236 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:26.236 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:28:26.236 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:26.494 05:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:28:26.494 05:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:27.866 05:23:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:27.866 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:28.123 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:28.123 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:28.123 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:28.123 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:28.380 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:28.380 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:28.380 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:28.380 05:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:28.636 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:28.636 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:28:28.636 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:28.636 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:28.892 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:28.892 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:28:28.892 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:28.892 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:28:29.148 05:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:28:30.079 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:28:30.080 05:23:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:28:30.080 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:30.080 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:28:30.338 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:30.338 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:28:30.338 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:30.338 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:28:30.595 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:30.595 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:28:30.595 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:28:30.595 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:30.595 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:30.595 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:28:30.595 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:30.595 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:28:30.851 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:30.851 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:28:30.851 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:30.851 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:28:31.108 05:23:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89080 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89080 ']' 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89080 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:31.108 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89080 01:28:31.390 killing process with pid 89080 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89080' 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89080 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89080 01:28:31.390 { 01:28:31.390 "results": [ 01:28:31.390 { 01:28:31.390 "job": "Nvme0n1", 01:28:31.390 "core_mask": "0x4", 01:28:31.390 "workload": "verify", 01:28:31.390 "status": "terminated", 01:28:31.390 "verify_range": { 01:28:31.390 "start": 0, 01:28:31.390 "length": 16384 01:28:31.390 }, 01:28:31.390 "queue_depth": 128, 01:28:31.390 "io_size": 4096, 01:28:31.390 "runtime": 28.512795, 01:28:31.390 "iops": 12776.509633657452, 01:28:31.390 "mibps": 49.90824075647442, 01:28:31.390 "io_failed": 0, 01:28:31.390 "io_timeout": 0, 01:28:31.390 "avg_latency_us": 9998.284426602184, 01:28:31.390 "min_latency_us": 286.72, 01:28:31.390 "max_latency_us": 3071521.083076923 01:28:31.390 } 01:28:31.390 ], 01:28:31.390 "core_count": 1 01:28:31.390 } 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89080 01:28:31.390 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:28:31.390 [2024-12-09 05:22:52.422363] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:31.390 [2024-12-09 05:22:52.422428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89080 ] 01:28:31.390 [2024-12-09 05:22:52.557451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:31.390 [2024-12-09 05:22:52.592202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:28:31.390 Running I/O for 90 seconds... 
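(The JSON blob interleaved above appears to be the per-job summary emitted as bdevperf shuts down; the run was terminated by killprocess after roughly 28.5 s, hence "status": "terminated". If such a blob is captured to a file, the headline numbers can be pulled out with jq; the filename below is purely illustrative and not part of the test:)

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' \
        bdevperf_results.json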
01:28:31.390 12817.00 IOPS, 50.07 MiB/s [2024-12-09T05:23:23.007Z] 13046.00 IOPS, 50.96 MiB/s [2024-12-09T05:23:23.007Z] 13097.33 IOPS, 51.16 MiB/s [2024-12-09T05:23:23.007Z] 13129.75 IOPS, 51.29 MiB/s [2024-12-09T05:23:23.007Z] 13119.80 IOPS, 51.25 MiB/s [2024-12-09T05:23:23.007Z] 13147.50 IOPS, 51.36 MiB/s [2024-12-09T05:23:23.007Z] 13171.00 IOPS, 51.45 MiB/s [2024-12-09T05:23:23.007Z] 13202.12 IOPS, 51.57 MiB/s [2024-12-09T05:23:23.007Z] 13198.44 IOPS, 51.56 MiB/s [2024-12-09T05:23:23.007Z] 13186.80 IOPS, 51.51 MiB/s [2024-12-09T05:23:23.007Z] 13199.64 IOPS, 51.56 MiB/s [2024-12-09T05:23:23.007Z] 13207.67 IOPS, 51.59 MiB/s [2024-12-09T05:23:23.007Z] [2024-12-09 05:23:07.009052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.390 
[2024-12-09 05:23:07.009244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.390 [2024-12-09 05:23:07.009506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.390 [2024-12-09 05:23:07.009517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009640] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 
05:23:07.009852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.009872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.009985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.009997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39200 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.010046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.391 [2024-12-09 05:23:07.010065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.391 [2024-12-09 05:23:07.010803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.391 [2024-12-09 05:23:07.010815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.010996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
01:28:31.392 [2024-12-09 05:23:07.011143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.392 [2024-12-09 05:23:07.011553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.392 [2024-12-09 05:23:07.011573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:28:31.393 [2024-12-09 05:23:07.011712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.011810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.011919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.011926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.393 [2024-12-09 05:23:07.012603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
01:28:31.393 [2024-12-09 05:23:07.012793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.393 [2024-12-09 05:23:07.012818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.393 [2024-12-09 05:23:07.012830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.012985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.012992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:28:31.394 [2024-12-09 05:23:07.013354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.394 [2024-12-09 05:23:07.013372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.394 [2024-12-09 05:23:07.013462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.394 [2024-12-09 05:23:07.013469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.395 [2024-12-09 05:23:07.013543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.395 [2024-12-09 05:23:07.013569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.013680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.013687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.395 [2024-12-09 05:23:07.014414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.395 [2024-12-09 05:23:07.014421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
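The burst of NOTICE pairs above is SPDK printing each queued I/O (nvme_io_qpair_print_command) together with its completion (spdk_nvme_print_completion). Every completion reports status (03/02), that is Status Code Type 03h (Path Related Status) with Status Code 02h (Asymmetric Access Inaccessible), and dnr:0, which per the NVMe spec means the command failed only because the namespace's ANA group is inaccessible on this path and the host may retry it on another path; this is consistent with a path being placed in the ANA Inaccessible state during the test. As a rough illustration of how the printed "(03/02) ... p:0 m:0 dnr:0" fields decode, the following is a minimal self-contained C sketch; the struct and field names are local to the sketch (they are not SPDK API) and only mirror the NVMe completion status-field layout.

/*
 * Illustrative decode of the "(SCT/SC)" pair printed in the log above.
 * Layout mirrors the NVMe completion entry: phase tag plus the 15-bit
 * Status Field (SC, SCT, CRD, M, DNR). SCT 0x3 is Path Related Status;
 * SC 0x02 within that type is Asymmetric Access Inaccessible.
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_bits {
	uint16_t p   : 1;   /* phase tag            -> "p:0"   */
	uint16_t sc  : 8;   /* status code          -> "02"    */
	uint16_t sct : 3;   /* status code type     -> "03"    */
	uint16_t crd : 2;   /* command retry delay             */
	uint16_t m   : 1;   /* more                 -> "m:0"   */
	uint16_t dnr : 1;   /* do not retry         -> "dnr:0" */
};

int main(void)
{
	/* Values matching the completions in this log burst. */
	struct nvme_status_bits st = { .p = 0, .sc = 0x02, .sct = 0x3,
				       .crd = 0, .m = 0, .dnr = 0 };

	int retriable = (st.dnr == 0);  /* dnr:0 -> host may retry elsewhere */

	printf("sct=%02x sc=%02x -> %s, retry allowed: %s\n",
	       (unsigned)st.sct, (unsigned)st.sc,
	       (st.sct == 0x3 && st.sc == 0x02) ?
			"ASYMMETRIC ACCESS INACCESSIBLE" : "other status",
	       retriable ? "yes" : "no");
	return 0;
}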
01:28:31.395 [2024-12-09 05:23:07.014433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:28:31.395 [2024-12-09 05:23:07.014439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
01:28:31.396 [2024-12-09 05:23:07.022576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:28:31.396 [2024-12-09 05:23:07.022586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted (01:28:31.395-01:28:31.400, 2024-12-09 05:23:07.014-05:23:07.028): WRITE commands (lba 39336-40160, SGL DATA BLOCK OFFSET) and READ commands (lba 39144-39328, SGL TRANSPORT DATA BLOCK) on sqid:1, each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
01:28:31.400 [2024-12-09 05:23:07.028664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:28:31.400 [2024-12-09 05:23:07.028673] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.400 [2024-12-09 05:23:07.028899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.400 [2024-12-09 05:23:07.028915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 
05:23:07.028930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.028947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.028956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.028972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.028997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39552 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.401 [2024-12-09 05:23:07.029729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.029821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.029829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.030491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.030506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.030531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.030540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.030557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.401 [2024-12-09 05:23:07.030577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.401 [2024-12-09 05:23:07.030593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 
m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.030976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.030985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 
[2024-12-09 05:23:07.031364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.402 [2024-12-09 05:23:07.031576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.402 [2024-12-09 05:23:07.031593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.031982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.031998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.032007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032844] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.032991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.032999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.033024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.033048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.033072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d 
p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.403 [2024-12-09 05:23:07.033097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.033126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.033150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.403 [2024-12-09 05:23:07.033174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.403 [2024-12-09 05:23:07.033190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
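Every completion in this stretch of the log carries the status pair (03/02): Status Code Type 0x3 (Path Related Status) and Status Code 0x02 (Asymmetric Access Inaccessible), which a target reports when the namespace's ANA group is inaccessible over the path the I/O queue pair is using. The short C sketch below is an illustrative decode only (it is not SPDK's own implementation); it assumes the field layout of completion queue entry dword 3 from the NVMe base specification and shows how the sct/sc/p/m/dnr values printed in these lines are extracted.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of NVMe completion dword 3 into the fields shown in the
 * surrounding log lines: sct/sc (printed as "(03/02)"), p, m and dnr.
 * Bit positions follow the NVMe base specification; this is not SPDK code. */
static void print_status(uint32_t cpl_dw3)
{
    unsigned p   = (cpl_dw3 >> 16) & 0x1;   /* phase tag */
    unsigned sc  = (cpl_dw3 >> 17) & 0xff;  /* status code */
    unsigned sct = (cpl_dw3 >> 25) & 0x7;   /* status code type */
    unsigned m   = (cpl_dw3 >> 30) & 0x1;   /* more bit */
    unsigned dnr = (cpl_dw3 >> 31) & 0x1;   /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* Hypothetical example value: SCT 0x3 = Path Related Status,
     * SC 0x02 = Asymmetric Access Inaccessible, P/M/DNR all zero. */
    print_status((0x3u << 25) | (0x02u << 17));
    return 0;
}

Compiled and run, this prints "(03/02) p:0 m:0 dnr:0", matching the completion pattern repeated throughout the entries above and below.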
01:28:31.404 [2024-12-09 05:23:07.033823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.033920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.033944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.033969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.033984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.033993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.034018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.034046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.034071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.404 [2024-12-09 05:23:07.034095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.034120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.404 [2024-12-09 05:23:07.034144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.404 [2024-12-09 05:23:07.034160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.405 [2024-12-09 05:23:07.034314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.405 [2024-12-09 05:23:07.034342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.034407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.034416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
01:28:31.405 [2024-12-09 05:23:07.035216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.405 [2024-12-09 05:23:07.035774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.405 [2024-12-09 05:23:07.035783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 
[2024-12-09 05:23:07.035957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.035981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.035997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40032 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.406 [2024-12-09 05:23:07.036609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.036633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.036649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.036658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.037319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.037337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.037354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.037363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 
05:23:07.037379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.037388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.037404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.037412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.406 [2024-12-09 05:23:07.037428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.406 [2024-12-09 05:23:07.037437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.407 [2024-12-09 05:23:07.037654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.037990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.037999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 
05:23:07.038120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.407 [2024-12-09 05:23:07.038285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.407 [2024-12-09 05:23:07.038293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39552 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.408 [2024-12-09 05:23:07.038895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.038935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.038943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 
m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.408 [2024-12-09 05:23:07.039892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.408 [2024-12-09 05:23:07.039901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.039916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.039925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.039941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.039949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.039965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.039974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.039989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 
[2024-12-09 05:23:07.040473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.409 [2024-12-09 05:23:07.040867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.409 [2024-12-09 05:23:07.040876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.040891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.040900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.040916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.040928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.040944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.040953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.040968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:99 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.040977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.040993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.041001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.041026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.041051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.041075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.041099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.041990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.041999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d 
p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.410 [2024-12-09 05:23:07.042145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.410 [2024-12-09 05:23:07.042511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.410 [2024-12-09 05:23:07.042520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:28:31.411 [2024-12-09 05:23:07.042874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.042970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.042986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.042994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.411 [2024-12-09 05:23:07.043388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.043404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.043416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.044051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.044066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.044084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.411 [2024-12-09 05:23:07.044093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.411 [2024-12-09 05:23:07.044109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
01:28:31.412 [2024-12-09 05:23:07.044230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 [2024-12-09 05:23:07.044932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.412 
[2024-12-09 05:23:07.044954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.412 [2024-12-09 05:23:07.044966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.044972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.044984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.044991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045321] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.413 [2024-12-09 05:23:07.045427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.045970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.045984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 
05:23:07.046041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.413 [2024-12-09 05:23:07.046216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.413 [2024-12-09 05:23:07.046223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.414 [2024-12-09 05:23:07.046242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.414 [2024-12-09 05:23:07.046875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.414 [2024-12-09 05:23:07.046897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.414 [2024-12-09 05:23:07.046908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.414 [2024-12-09 05:23:07.046915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.046927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.046934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.046946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.046952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.046964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.046971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.046983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.046989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.047008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.047178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.415 [2024-12-09 05:23:07.047197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
01:28:31.415 [2024-12-09 05:23:07.047855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.047991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.047998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.415 [2024-12-09 05:23:07.048116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.415 [2024-12-09 05:23:07.048128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 
[2024-12-09 05:23:07.048426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.416 [2024-12-09 05:23:07.048778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.416 [2024-12-09 05:23:07.048789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.048812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.048831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.048850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.048887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.048894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 
05:23:07.049494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.417 [2024-12-09 05:23:07.049703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.417 [2024-12-09 05:23:07.049869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.417 [2024-12-09 05:23:07.049876] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.417-01:28:31.422 [2024-12-09 05:23:07.049888 - 05:23:07.059887] nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: repeated *NOTICE* command/completion pairs on qid:1, covering WRITE commands (sqid:1 nsid:1, lba 39336-40160, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1, lba 39144-39328, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) across cids 0-126, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd incrementing per completion. 01:28:31.422 [2024-12-09 05:23:07.059887] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.422 [2024-12-09 05:23:07.059903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.422 [2024-12-09 05:23:07.059911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.422 [2024-12-09 05:23:07.059927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.059936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.059951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.059960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.059976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.059984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 
[2024-12-09 05:23:07.060133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060633] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.060686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.060695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.061298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.061325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.423 [2024-12-09 05:23:07.061350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.423 [2024-12-09 05:23:07.061375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.423 [2024-12-09 05:23:07.061399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.423 [2024-12-09 05:23:07.061424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.423 [2024-12-09 05:23:07.061448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 
05:23:07.061464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.423 [2024-12-09 05:23:07.061472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.423 [2024-12-09 05:23:07.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.424 [2024-12-09 05:23:07.061738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061961] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.061977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.061986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 
05:23:07.062208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.424 [2024-12-09 05:23:07.062426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.424 [2024-12-09 05:23:07.062441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.062933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.425 [2024-12-09 05:23:07.062981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.062997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 
m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.063977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.063993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.064001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.064017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.425 [2024-12-09 05:23:07.064026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.425 [2024-12-09 05:23:07.064041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 
[2024-12-09 05:23:07.064661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.064981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.064996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.426 [2024-12-09 05:23:07.065005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.426 [2024-12-09 05:23:07.065024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.065865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.065889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.065923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.065952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.065967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.065976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 
05:23:07.065991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.427 [2024-12-09 05:23:07.066249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.427 [2024-12-09 05:23:07.066569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.427 [2024-12-09 05:23:07.066579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 
05:23:07.066724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.066986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.066995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067210] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 
05:23:07.067456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.428 [2024-12-09 05:23:07.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.428 [2024-12-09 05:23:07.067529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.428 [2024-12-09 05:23:07.067538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 
sqhd:0028 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.067975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.067983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:31.429 [2024-12-09 05:23:07.068901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.429 [2024-12-09 05:23:07.068908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.068924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.068930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.068946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.068968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.068975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.068990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.068997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:07.069399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:07.069408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:31.430 12979.46 IOPS, 50.70 MiB/s [2024-12-09T05:23:23.047Z] 12052.36 IOPS, 47.08 MiB/s [2024-12-09T05:23:23.047Z] 11248.87 IOPS, 43.94 MiB/s [2024-12-09T05:23:23.047Z] 10688.25 IOPS, 41.75 MiB/s [2024-12-09T05:23:23.047Z] 10851.88 IOPS, 42.39 MiB/s [2024-12-09T05:23:23.047Z] 10987.67 IOPS, 42.92 MiB/s [2024-12-09T05:23:23.047Z] 11295.32 IOPS, 44.12 MiB/s [2024-12-09T05:23:23.047Z] 11669.10 IOPS, 45.58 MiB/s [2024-12-09T05:23:23.047Z] 12003.24 IOPS, 46.89 MiB/s [2024-12-09T05:23:23.047Z] 12059.59 IOPS, 47.11 MiB/s [2024-12-09T05:23:23.047Z] 12113.52 IOPS, 47.32 MiB/s [2024-12-09T05:23:23.047Z] 12190.79 IOPS, 47.62 MiB/s [2024-12-09T05:23:23.047Z] 12445.00 IOPS, 48.61 MiB/s [2024-12-09T05:23:23.047Z] 12673.81 IOPS, 49.51 MiB/s [2024-12-09T05:23:23.047Z] [2024-12-09 05:23:20.577381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.430 [2024-12-09 05:23:20.577688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.430 [2024-12-09 05:23:20.577708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.430 [2024-12-09 05:23:20.577727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:31.430 [2024-12-09 05:23:20.577739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 
05:23:20.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.577983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.577995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63744 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.431 [2024-12-09 05:23:20.578233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.431 [2024-12-09 05:23:20.578812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:31.431 [2024-12-09 05:23:20.578824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.578935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.578954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.578973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.578985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.578992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 
05:23:20.579004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.579011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.579967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.579986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.579998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.580005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.580024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 
cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.580044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.580063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:31.432 [2024-12-09 05:23:20.580082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.580101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:31.432 [2024-12-09 05:23:20.580113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:31.432 [2024-12-09 05:23:20.580120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:31.432 12754.81 IOPS, 49.82 MiB/s [2024-12-09T05:23:23.049Z] 12773.04 IOPS, 49.89 MiB/s [2024-12-09T05:23:23.049Z] Received shutdown signal, test time was about 28.513443 seconds 01:28:31.432 01:28:31.432 Latency(us) 01:28:31.432 [2024-12-09T05:23:23.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:31.432 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:28:31.432 Verification LBA range: start 0x0 length 0x4000 01:28:31.432 Nvme0n1 : 28.51 12776.51 49.91 0.00 0.00 9998.28 286.72 3071521.08 01:28:31.432 [2024-12-09T05:23:23.049Z] =================================================================================================================== 01:28:31.432 [2024-12-09T05:23:23.049Z] Total : 12776.51 49.91 0.00 0.00 9998.28 286.72 3071521.08 01:28:31.432 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:28:31.690 rmmod nvme_tcp 01:28:31.690 rmmod nvme_fabrics 01:28:31.690 rmmod nvme_keyring 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 88982 ']' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 88982 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 88982 ']' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 88982 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88982 01:28:31.690 killing process with pid 88982 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88982' 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 88982 01:28:31.690 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 88982 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:28:31.947 05:23:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 01:28:31.947 ************************************ 01:28:31.947 END TEST nvmf_host_multipath_status 01:28:31.947 ************************************ 01:28:31.947 01:28:31.947 real 0m33.773s 01:28:31.947 user 1m49.804s 01:28:31.947 sys 0m6.194s 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:31.947 05:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:28:32.205 ************************************ 01:28:32.205 START TEST nvmf_discovery_remove_ifc 01:28:32.205 ************************************ 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:28:32.205 * Looking for test storage... 
01:28:32.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:32.205 --rc genhtml_branch_coverage=1 01:28:32.205 --rc genhtml_function_coverage=1 01:28:32.205 --rc genhtml_legend=1 01:28:32.205 --rc geninfo_all_blocks=1 01:28:32.205 --rc geninfo_unexecuted_blocks=1 01:28:32.205 01:28:32.205 ' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:32.205 --rc genhtml_branch_coverage=1 01:28:32.205 --rc genhtml_function_coverage=1 01:28:32.205 --rc genhtml_legend=1 01:28:32.205 --rc geninfo_all_blocks=1 01:28:32.205 --rc geninfo_unexecuted_blocks=1 01:28:32.205 01:28:32.205 ' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:32.205 --rc genhtml_branch_coverage=1 01:28:32.205 --rc genhtml_function_coverage=1 01:28:32.205 --rc genhtml_legend=1 01:28:32.205 --rc geninfo_all_blocks=1 01:28:32.205 --rc geninfo_unexecuted_blocks=1 01:28:32.205 01:28:32.205 ' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:32.205 --rc genhtml_branch_coverage=1 01:28:32.205 --rc genhtml_function_coverage=1 01:28:32.205 --rc genhtml_legend=1 01:28:32.205 --rc geninfo_all_blocks=1 01:28:32.205 --rc geninfo_unexecuted_blocks=1 01:28:32.205 01:28:32.205 ' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:28:32.205 05:23:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:28:32.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:28:32.205 05:23:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:28:32.205 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:28:32.206 Cannot find device "nvmf_init_br" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:28:32.206 Cannot find device "nvmf_init_br2" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:28:32.206 Cannot find device "nvmf_tgt_br" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:28:32.206 Cannot find device "nvmf_tgt_br2" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:28:32.206 Cannot find device "nvmf_init_br" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:28:32.206 Cannot find device "nvmf_init_br2" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:28:32.206 Cannot find device "nvmf_tgt_br" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:28:32.206 Cannot find device "nvmf_tgt_br2" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:28:32.206 Cannot find device "nvmf_br" 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 01:28:32.206 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:28:32.463 Cannot find device "nvmf_init_if" 01:28:32.463 05:23:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:28:32.463 Cannot find device "nvmf_init_if2" 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:32.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:32.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:28:32.463 05:23:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:28:32.463 05:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:28:32.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:28:32.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 01:28:32.463 01:28:32.463 --- 10.0.0.3 ping statistics --- 01:28:32.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:32.463 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:28:32.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:28:32.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 01:28:32.463 01:28:32.463 --- 10.0.0.4 ping statistics --- 01:28:32.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:32.463 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:28:32.463 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:28:32.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:28:32.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 01:28:32.463 01:28:32.464 --- 10.0.0.1 ping statistics --- 01:28:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:32.464 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:28:32.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:28:32.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 01:28:32.464 01:28:32.464 --- 10.0.0.2 ping statistics --- 01:28:32.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:32.464 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=90394 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 90394 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90394 ']' 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:32.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:32.464 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:32.721 [2024-12-09 05:23:24.090023] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:32.721 [2024-12-09 05:23:24.090089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:28:32.721 [2024-12-09 05:23:24.232171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:32.721 [2024-12-09 05:23:24.265606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:28:32.721 [2024-12-09 05:23:24.265646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:28:32.721 [2024-12-09 05:23:24.265652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:28:32.721 [2024-12-09 05:23:24.265657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:28:32.721 [2024-12-09 05:23:24.265661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:28:32.721 [2024-12-09 05:23:24.265910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:33.653 05:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:33.653 [2024-12-09 05:23:24.970815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:28:33.653 [2024-12-09 05:23:24.978900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:28:33.653 null0 01:28:33.653 [2024-12-09 05:23:25.010850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90444 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 90444 /tmp/host.sock 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90444 ']' 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:28:33.653 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:33.653 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:33.653 [2024-12-09 05:23:25.077020] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:28:33.653 [2024-12-09 05:23:25.077081] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90444 ] 01:28:33.653 [2024-12-09 05:23:25.213782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:33.653 [2024-12-09 05:23:25.244290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:28:34.589 05:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:34.589 05:23:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:35.521 [2024-12-09 05:23:26.953245] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:28:35.521 [2024-12-09 05:23:26.953273] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:28:35.521 [2024-12-09 05:23:26.953282] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:28:35.521 [2024-12-09 05:23:27.039351] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:28:35.521 [2024-12-09 05:23:27.093650] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:28:35.521 [2024-12-09 05:23:27.094216] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x102d190:1 started. 01:28:35.521 [2024-12-09 05:23:27.095406] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:28:35.521 [2024-12-09 05:23:27.095450] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:28:35.521 [2024-12-09 05:23:27.095466] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:28:35.521 [2024-12-09 05:23:27.095478] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:28:35.521 [2024-12-09 05:23:27.095494] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:35.521 [2024-12-09 05:23:27.101996] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x102d190 was disconnected and freed. delete nvme_qpair. 
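The rpc_cmd/jq/sort/xargs fragments that recur below are all the same pair of helpers from discovery_remove_ifc.sh: get_bdev_list asks the host app on /tmp/host.sock for its bdev names, and wait_for_bdev polls it once a second until the list matches an expected value. A sketch reconstructed from the commands visible in the trace, assuming rpc_cmd wraps scripts/rpc.py and with the loop structure simplified:

get_bdev_list() {
    # Space-separated, sorted list of the bdev names the host currently exposes.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
        jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll until the bdev list equals the expected string, e.g. "nvme0n1" or "".
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

Here wait_for_bdev nvme0n1 succeeds immediately, because the discovery attach above has already created nvme0n1.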
01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:28:35.521 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:35.777 05:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:36.758 05:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:37.688 05:23:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:37.688 05:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:39.057 05:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:39.989 05:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:40.922 05:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:40.922 [2024-12-09 05:23:32.534074] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:28:40.922 [2024-12-09 05:23:32.534132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:28:40.922 [2024-12-09 05:23:32.534140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:40.922 [2024-12-09 05:23:32.534147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:28:40.922 [2024-12-09 05:23:32.534152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:40.922 [2024-12-09 05:23:32.534158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:28:40.923 [2024-12-09 05:23:32.534162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:40.923 [2024-12-09 05:23:32.534167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:28:40.923 [2024-12-09 05:23:32.534171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:40.923 [2024-12-09 05:23:32.534176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:28:40.923 [2024-12-09 05:23:32.534181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:40.923 [2024-12-09 05:23:32.534185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a530 is same with the state(6) to be set 01:28:41.181 [2024-12-09 05:23:32.544070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a530 (9): Bad file descriptor 01:28:41.181 [2024-12-09 05:23:32.554085] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
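The spdk_sock_recv() errno 110 and ABORTED - SQ DELETION entries above are the expected fallout of the step at discovery_remove_ifc.sh@75-76: the test removes the listener address and downs the target-side veth, then waits for the host's bdev list to empty out. The two commands, as they appear in the trace, plus the wait that follows:

# Strip the address the subsystem listens on and take the target interface down.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# The host should eventually drop nvme0n1 and report an empty bdev list.
wait_for_bdev ''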
01:28:41.181 [2024-12-09 05:23:32.554102] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:28:41.181 [2024-12-09 05:23:32.554105] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:28:41.181 [2024-12-09 05:23:32.554108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:28:41.181 [2024-12-09 05:23:32.554127] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:28:42.113 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:42.113 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:42.113 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:42.114 [2024-12-09 05:23:33.602620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 01:28:42.114 [2024-12-09 05:23:33.602722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100a530 with addr=10.0.0.3, port=4420 01:28:42.114 [2024-12-09 05:23:33.602745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a530 is same with the state(6) to be set 01:28:42.114 [2024-12-09 05:23:33.602794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a530 (9): Bad file descriptor 01:28:42.114 [2024-12-09 05:23:33.603649] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 01:28:42.114 [2024-12-09 05:23:33.603705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:28:42.114 [2024-12-09 05:23:33.603719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:28:42.114 [2024-12-09 05:23:33.603734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:28:42.114 [2024-12-09 05:23:33.603747] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:28:42.114 [2024-12-09 05:23:33.603756] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:28:42.114 [2024-12-09 05:23:33.603764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:28:42.114 [2024-12-09 05:23:33.603777] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
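The disconnect/reconnect churn in these entries is driven by the timeouts the test passed to bdev_nvme_start_discovery earlier: a 1 s reconnect delay, a 1 s fast-IO-fail timeout and a 2 s controller-loss timeout mean the host abandons the dead path quickly. For reference, that invocation from earlier in the trace (again assuming rpc_cmd wraps scripts/rpc.py):

# Discovery is attached with aggressive timeouts so the removed interface is detected fast.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach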
01:28:42.114 [2024-12-09 05:23:33.603785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:28:42.114 05:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:43.045 [2024-12-09 05:23:34.603832] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:28:43.045 [2024-12-09 05:23:34.603869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:28:43.045 [2024-12-09 05:23:34.603882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:28:43.045 [2024-12-09 05:23:34.603889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:28:43.045 [2024-12-09 05:23:34.603895] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 01:28:43.045 [2024-12-09 05:23:34.603900] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:28:43.045 [2024-12-09 05:23:34.603905] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:28:43.045 [2024-12-09 05:23:34.603908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:28:43.045 [2024-12-09 05:23:34.603931] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 01:28:43.045 [2024-12-09 05:23:34.603968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:28:43.045 [2024-12-09 05:23:34.603977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:43.045 [2024-12-09 05:23:34.603986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:28:43.045 [2024-12-09 05:23:34.603992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:43.045 [2024-12-09 05:23:34.603999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:28:43.045 [2024-12-09 05:23:34.604004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:43.045 [2024-12-09 05:23:34.604011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:28:43.045 [2024-12-09 05:23:34.604017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:43.045 [2024-12-09 05:23:34.604023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:28:43.045 [2024-12-09 05:23:34.604029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:43.045 [2024-12-09 05:23:34.604035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 01:28:43.045 [2024-12-09 05:23:34.604737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf972e0 (9): Bad file descriptor 01:28:43.045 [2024-12-09 05:23:34.605744] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:28:43.045 [2024-12-09 05:23:34.605752] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 01:28:43.045 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:43.045 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:43.045 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:43.045 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.046 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:43.046 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:43.046 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:43.046 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:28:43.303 05:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:44.235 05:23:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:28:44.235 05:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:28:45.168 [2024-12-09 05:23:36.615481] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:28:45.168 [2024-12-09 05:23:36.615509] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:28:45.168 [2024-12-09 05:23:36.615520] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:28:45.168 [2024-12-09 05:23:36.702607] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:28:45.168 [2024-12-09 05:23:36.756945] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 01:28:45.168 [2024-12-09 05:23:36.757435] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1003680:1 started. 
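At this point the test has restored the interface (discovery_remove_ifc.sh@82-83) and the discovery poller has found nqn.2016-06.io.spdk:cnode0 again, attaching it as a fresh controller whose namespace surfaces as nvme1n1. The restore half of the test, as shown in the trace:

# Put the listener address back and bring the target interface up again.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# Discovery reconnects on its own; the namespace reappears under the new controller name.
wait_for_bdev nvme1n1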
01:28:45.168 [2024-12-09 05:23:36.758512] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:28:45.168 [2024-12-09 05:23:36.758550] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:28:45.168 [2024-12-09 05:23:36.758578] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:28:45.168 [2024-12-09 05:23:36.758591] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 01:28:45.168 [2024-12-09 05:23:36.758598] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:28:45.168 [2024-12-09 05:23:36.764314] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1003680 was disconnected and freed. delete nvme_qpair. 01:28:45.168 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90444 ']' 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:45.426 killing process with pid 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90444' 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90444 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 01:28:45.426 05:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:28:45.992 rmmod nvme_tcp 01:28:45.992 rmmod nvme_fabrics 01:28:45.992 rmmod nvme_keyring 01:28:45.992 05:23:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 90394 ']' 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 90394 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90394 ']' 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90394 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90394 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90394' 01:28:45.992 killing process with pid 90394 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90394 01:28:45.992 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90394 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:28:46.250 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:28:46.251 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 01:28:46.508 01:28:46.508 real 0m14.363s 01:28:46.508 user 0m24.866s 01:28:46.508 sys 0m1.290s 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:46.508 05:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:28:46.508 ************************************ 01:28:46.508 END TEST nvmf_discovery_remove_ifc 01:28:46.508 ************************************ 01:28:46.509 05:23:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:28:46.509 05:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:46.509 05:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:46.509 05:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:28:46.509 ************************************ 01:28:46.509 START TEST nvmf_identify_kernel_target 01:28:46.509 ************************************ 01:28:46.509 05:23:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:28:46.509 * Looking for test storage... 
01:28:46.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:46.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:46.509 --rc genhtml_branch_coverage=1 01:28:46.509 --rc genhtml_function_coverage=1 01:28:46.509 --rc genhtml_legend=1 01:28:46.509 --rc geninfo_all_blocks=1 01:28:46.509 --rc geninfo_unexecuted_blocks=1 01:28:46.509 01:28:46.509 ' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:46.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:46.509 --rc genhtml_branch_coverage=1 01:28:46.509 --rc genhtml_function_coverage=1 01:28:46.509 --rc genhtml_legend=1 01:28:46.509 --rc geninfo_all_blocks=1 01:28:46.509 --rc geninfo_unexecuted_blocks=1 01:28:46.509 01:28:46.509 ' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:46.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:46.509 --rc genhtml_branch_coverage=1 01:28:46.509 --rc genhtml_function_coverage=1 01:28:46.509 --rc genhtml_legend=1 01:28:46.509 --rc geninfo_all_blocks=1 01:28:46.509 --rc geninfo_unexecuted_blocks=1 01:28:46.509 01:28:46.509 ' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:46.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:46.509 --rc genhtml_branch_coverage=1 01:28:46.509 --rc genhtml_function_coverage=1 01:28:46.509 --rc genhtml_legend=1 01:28:46.509 --rc geninfo_all_blocks=1 01:28:46.509 --rc geninfo_unexecuted_blocks=1 01:28:46.509 01:28:46.509 ' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
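The long run of scripts/common.sh lines above is just a dotted-version comparison: lt 1.15 2 splits both version strings into fields and compares them left to right to decide whether the installed lcov predates 2.x, which selects the older --rc lcov_branch_coverage option spelling exported just above. A simplified sketch of the same idea, not the literal helper (the real cmp_versions also splits on '-' and ':'):

# Succeed if dotted version $1 is strictly less than $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"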
01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:28:46.509 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:28:46.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:28:46.768 05:23:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:28:46.768 05:23:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:28:46.768 Cannot find device "nvmf_init_br" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:28:46.768 Cannot find device "nvmf_init_br2" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:28:46.768 Cannot find device "nvmf_tgt_br" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:28:46.768 Cannot find device "nvmf_tgt_br2" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:28:46.768 Cannot find device "nvmf_init_br" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:28:46.768 Cannot find device "nvmf_init_br2" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:28:46.768 Cannot find device "nvmf_tgt_br" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:28:46.768 Cannot find device "nvmf_tgt_br2" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:28:46.768 Cannot find device "nvmf_br" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:28:46.768 Cannot find device "nvmf_init_if" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:28:46.768 Cannot find device "nvmf_init_if2" 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:46.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:46.768 05:23:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:46.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:28:46.768 05:23:38 
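
At this point the test network exists: the two initiator veth pairs keep their nvmf_init_if/nvmf_init_if2 ends in the root namespace with 10.0.0.1 and 10.0.0.2, the two target pairs have their nvmf_tgt_if/nvmf_tgt_if2 ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and the nvmf_br bridge is up. The entries that follow attach the four *_br peer ends to that bridge, add comment-tagged iptables ACCEPT rules for TCP port 4420, and verify connectivity with ping in both directions. A standalone sketch of the same topology, with names and addresses taken from the trace (the helper in nvmf/common.sh may differ in detail):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target side lives in its own namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # first initiator address
  ip addr add 10.0.0.2/24 dev nvmf_init_if2               # second initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target address
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                   # traced in the next entries
  done

The bridge is what lets the root-namespace initiator addresses reach the in-namespace target addresses, which the four pings below confirm.
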
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:28:46.768 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:28:47.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:28:47.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:28:47.026 01:28:47.026 --- 10.0.0.3 ping statistics --- 01:28:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:47.026 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:28:47.026 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:28:47.026 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 01:28:47.026 01:28:47.026 --- 10.0.0.4 ping statistics --- 01:28:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:47.026 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:28:47.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:28:47.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:28:47.026 01:28:47.026 --- 10.0.0.1 ping statistics --- 01:28:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:47.026 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:28:47.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:28:47.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:28:47.026 01:28:47.026 --- 10.0.0.2 ping statistics --- 01:28:47.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:47.026 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:28:47.026 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:28:47.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:47.283 Waiting for block devices as requested 01:28:47.283 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:28:47.283 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:28:47.283 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:28:47.284 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:28:47.541 No valid GPT data, bailing 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:28:47.541 05:23:38 
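
configure_kernel_target drives the Linux kernel nvmet target entirely through configfs. setup.sh reset above rebinds the emulated NVMe controllers from uio_pci_generic back to the kernel nvme driver, the scan over /sys/block/nvme* running above and below this point skips zoned and in-use namespaces (hence the repeated "No valid GPT data, bailing" from spdk-gpt.py), and it ends up choosing /dev/nvme1n1 as the backing device for nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. The trace shows the echo commands but not their redirection targets; assuming the standard nvmet configfs attribute names, the configuration amounts to roughly:

  modprobe nvmet                  # nvmet_tcp is pulled in when the tcp port is configured
  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
  # subsystem: model string, open to any host NQN (destinations are assumed, see above)
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  # namespace 1: back it with the free local NVMe namespace and enable it
  echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  # port 1: listen on the initiator-side address over TCP
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp      > ports/1/addr_trtype
  echo 4420     > ports/1/addr_trsvcid
  echo ipv4     > ports/1/addr_adrfam
  # expose the subsystem on that port
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The nvme discover output further below confirms the result: a discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both reachable at 10.0.0.1:4420, and the identify data reports the SPDK- model string and the 5 GiB namespace that was exported.
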
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:28:47.541 No valid GPT data, bailing 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:28:47.541 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:28:47.542 05:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:28:47.542 No valid GPT data, bailing 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:28:47.542 No valid GPT data, bailing 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.1 -t tcp -s 4420 01:28:47.542 01:28:47.542 Discovery Log Number of Records 2, Generation counter 2 01:28:47.542 =====Discovery Log Entry 0====== 01:28:47.542 trtype: tcp 01:28:47.542 adrfam: ipv4 01:28:47.542 subtype: current discovery subsystem 01:28:47.542 treq: not specified, sq flow control disable supported 01:28:47.542 portid: 1 01:28:47.542 trsvcid: 4420 01:28:47.542 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:28:47.542 traddr: 10.0.0.1 01:28:47.542 eflags: none 01:28:47.542 sectype: none 01:28:47.542 =====Discovery Log Entry 1====== 01:28:47.542 trtype: tcp 01:28:47.542 adrfam: ipv4 01:28:47.542 subtype: nvme subsystem 01:28:47.542 treq: not 
specified, sq flow control disable supported 01:28:47.542 portid: 1 01:28:47.542 trsvcid: 4420 01:28:47.542 subnqn: nqn.2016-06.io.spdk:testnqn 01:28:47.542 traddr: 10.0.0.1 01:28:47.542 eflags: none 01:28:47.542 sectype: none 01:28:47.542 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:28:47.542 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:28:47.800 ===================================================== 01:28:47.800 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:28:47.800 ===================================================== 01:28:47.800 Controller Capabilities/Features 01:28:47.800 ================================ 01:28:47.800 Vendor ID: 0000 01:28:47.800 Subsystem Vendor ID: 0000 01:28:47.800 Serial Number: e4df9524df1526ac6dd4 01:28:47.800 Model Number: Linux 01:28:47.800 Firmware Version: 6.8.9-20 01:28:47.800 Recommended Arb Burst: 0 01:28:47.800 IEEE OUI Identifier: 00 00 00 01:28:47.800 Multi-path I/O 01:28:47.800 May have multiple subsystem ports: No 01:28:47.800 May have multiple controllers: No 01:28:47.800 Associated with SR-IOV VF: No 01:28:47.800 Max Data Transfer Size: Unlimited 01:28:47.800 Max Number of Namespaces: 0 01:28:47.800 Max Number of I/O Queues: 1024 01:28:47.800 NVMe Specification Version (VS): 1.3 01:28:47.800 NVMe Specification Version (Identify): 1.3 01:28:47.800 Maximum Queue Entries: 1024 01:28:47.800 Contiguous Queues Required: No 01:28:47.800 Arbitration Mechanisms Supported 01:28:47.800 Weighted Round Robin: Not Supported 01:28:47.800 Vendor Specific: Not Supported 01:28:47.800 Reset Timeout: 7500 ms 01:28:47.800 Doorbell Stride: 4 bytes 01:28:47.800 NVM Subsystem Reset: Not Supported 01:28:47.800 Command Sets Supported 01:28:47.800 NVM Command Set: Supported 01:28:47.800 Boot Partition: Not Supported 01:28:47.800 Memory Page Size Minimum: 4096 bytes 01:28:47.800 Memory Page Size Maximum: 4096 bytes 01:28:47.800 Persistent Memory Region: Not Supported 01:28:47.800 Optional Asynchronous Events Supported 01:28:47.800 Namespace Attribute Notices: Not Supported 01:28:47.800 Firmware Activation Notices: Not Supported 01:28:47.800 ANA Change Notices: Not Supported 01:28:47.800 PLE Aggregate Log Change Notices: Not Supported 01:28:47.800 LBA Status Info Alert Notices: Not Supported 01:28:47.800 EGE Aggregate Log Change Notices: Not Supported 01:28:47.800 Normal NVM Subsystem Shutdown event: Not Supported 01:28:47.800 Zone Descriptor Change Notices: Not Supported 01:28:47.800 Discovery Log Change Notices: Supported 01:28:47.800 Controller Attributes 01:28:47.800 128-bit Host Identifier: Not Supported 01:28:47.800 Non-Operational Permissive Mode: Not Supported 01:28:47.800 NVM Sets: Not Supported 01:28:47.800 Read Recovery Levels: Not Supported 01:28:47.800 Endurance Groups: Not Supported 01:28:47.800 Predictable Latency Mode: Not Supported 01:28:47.800 Traffic Based Keep ALive: Not Supported 01:28:47.800 Namespace Granularity: Not Supported 01:28:47.800 SQ Associations: Not Supported 01:28:47.800 UUID List: Not Supported 01:28:47.800 Multi-Domain Subsystem: Not Supported 01:28:47.800 Fixed Capacity Management: Not Supported 01:28:47.800 Variable Capacity Management: Not Supported 01:28:47.800 Delete Endurance Group: Not Supported 01:28:47.800 Delete NVM Set: Not Supported 01:28:47.800 Extended LBA Formats Supported: Not Supported 01:28:47.800 Flexible Data 
Placement Supported: Not Supported 01:28:47.800 01:28:47.800 Controller Memory Buffer Support 01:28:47.800 ================================ 01:28:47.800 Supported: No 01:28:47.800 01:28:47.800 Persistent Memory Region Support 01:28:47.801 ================================ 01:28:47.801 Supported: No 01:28:47.801 01:28:47.801 Admin Command Set Attributes 01:28:47.801 ============================ 01:28:47.801 Security Send/Receive: Not Supported 01:28:47.801 Format NVM: Not Supported 01:28:47.801 Firmware Activate/Download: Not Supported 01:28:47.801 Namespace Management: Not Supported 01:28:47.801 Device Self-Test: Not Supported 01:28:47.801 Directives: Not Supported 01:28:47.801 NVMe-MI: Not Supported 01:28:47.801 Virtualization Management: Not Supported 01:28:47.801 Doorbell Buffer Config: Not Supported 01:28:47.801 Get LBA Status Capability: Not Supported 01:28:47.801 Command & Feature Lockdown Capability: Not Supported 01:28:47.801 Abort Command Limit: 1 01:28:47.801 Async Event Request Limit: 1 01:28:47.801 Number of Firmware Slots: N/A 01:28:47.801 Firmware Slot 1 Read-Only: N/A 01:28:47.801 Firmware Activation Without Reset: N/A 01:28:47.801 Multiple Update Detection Support: N/A 01:28:47.801 Firmware Update Granularity: No Information Provided 01:28:47.801 Per-Namespace SMART Log: No 01:28:47.801 Asymmetric Namespace Access Log Page: Not Supported 01:28:47.801 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:28:47.801 Command Effects Log Page: Not Supported 01:28:47.801 Get Log Page Extended Data: Supported 01:28:47.801 Telemetry Log Pages: Not Supported 01:28:47.801 Persistent Event Log Pages: Not Supported 01:28:47.801 Supported Log Pages Log Page: May Support 01:28:47.801 Commands Supported & Effects Log Page: Not Supported 01:28:47.801 Feature Identifiers & Effects Log Page:May Support 01:28:47.801 NVMe-MI Commands & Effects Log Page: May Support 01:28:47.801 Data Area 4 for Telemetry Log: Not Supported 01:28:47.801 Error Log Page Entries Supported: 1 01:28:47.801 Keep Alive: Not Supported 01:28:47.801 01:28:47.801 NVM Command Set Attributes 01:28:47.801 ========================== 01:28:47.801 Submission Queue Entry Size 01:28:47.801 Max: 1 01:28:47.801 Min: 1 01:28:47.801 Completion Queue Entry Size 01:28:47.801 Max: 1 01:28:47.801 Min: 1 01:28:47.801 Number of Namespaces: 0 01:28:47.801 Compare Command: Not Supported 01:28:47.801 Write Uncorrectable Command: Not Supported 01:28:47.801 Dataset Management Command: Not Supported 01:28:47.801 Write Zeroes Command: Not Supported 01:28:47.801 Set Features Save Field: Not Supported 01:28:47.801 Reservations: Not Supported 01:28:47.801 Timestamp: Not Supported 01:28:47.801 Copy: Not Supported 01:28:47.801 Volatile Write Cache: Not Present 01:28:47.801 Atomic Write Unit (Normal): 1 01:28:47.801 Atomic Write Unit (PFail): 1 01:28:47.801 Atomic Compare & Write Unit: 1 01:28:47.801 Fused Compare & Write: Not Supported 01:28:47.801 Scatter-Gather List 01:28:47.801 SGL Command Set: Supported 01:28:47.801 SGL Keyed: Not Supported 01:28:47.801 SGL Bit Bucket Descriptor: Not Supported 01:28:47.801 SGL Metadata Pointer: Not Supported 01:28:47.801 Oversized SGL: Not Supported 01:28:47.801 SGL Metadata Address: Not Supported 01:28:47.801 SGL Offset: Supported 01:28:47.801 Transport SGL Data Block: Not Supported 01:28:47.801 Replay Protected Memory Block: Not Supported 01:28:47.801 01:28:47.801 Firmware Slot Information 01:28:47.801 ========================= 01:28:47.801 Active slot: 0 01:28:47.801 01:28:47.801 01:28:47.801 Error Log 
01:28:47.801 ========= 01:28:47.801 01:28:47.801 Active Namespaces 01:28:47.801 ================= 01:28:47.801 Discovery Log Page 01:28:47.801 ================== 01:28:47.801 Generation Counter: 2 01:28:47.801 Number of Records: 2 01:28:47.801 Record Format: 0 01:28:47.801 01:28:47.801 Discovery Log Entry 0 01:28:47.801 ---------------------- 01:28:47.801 Transport Type: 3 (TCP) 01:28:47.801 Address Family: 1 (IPv4) 01:28:47.801 Subsystem Type: 3 (Current Discovery Subsystem) 01:28:47.801 Entry Flags: 01:28:47.801 Duplicate Returned Information: 0 01:28:47.801 Explicit Persistent Connection Support for Discovery: 0 01:28:47.801 Transport Requirements: 01:28:47.801 Secure Channel: Not Specified 01:28:47.801 Port ID: 1 (0x0001) 01:28:47.801 Controller ID: 65535 (0xffff) 01:28:47.801 Admin Max SQ Size: 32 01:28:47.801 Transport Service Identifier: 4420 01:28:47.801 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:28:47.801 Transport Address: 10.0.0.1 01:28:47.801 Discovery Log Entry 1 01:28:47.801 ---------------------- 01:28:47.801 Transport Type: 3 (TCP) 01:28:47.801 Address Family: 1 (IPv4) 01:28:47.801 Subsystem Type: 2 (NVM Subsystem) 01:28:47.801 Entry Flags: 01:28:47.801 Duplicate Returned Information: 0 01:28:47.801 Explicit Persistent Connection Support for Discovery: 0 01:28:47.801 Transport Requirements: 01:28:47.801 Secure Channel: Not Specified 01:28:47.801 Port ID: 1 (0x0001) 01:28:47.801 Controller ID: 65535 (0xffff) 01:28:47.801 Admin Max SQ Size: 32 01:28:47.801 Transport Service Identifier: 4420 01:28:47.801 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:28:47.801 Transport Address: 10.0.0.1 01:28:47.801 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:28:48.059 get_feature(0x01) failed 01:28:48.059 get_feature(0x02) failed 01:28:48.059 get_feature(0x04) failed 01:28:48.059 ===================================================== 01:28:48.059 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:28:48.059 ===================================================== 01:28:48.059 Controller Capabilities/Features 01:28:48.059 ================================ 01:28:48.059 Vendor ID: 0000 01:28:48.059 Subsystem Vendor ID: 0000 01:28:48.059 Serial Number: 462d751fce207ccd4320 01:28:48.059 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:28:48.059 Firmware Version: 6.8.9-20 01:28:48.059 Recommended Arb Burst: 6 01:28:48.059 IEEE OUI Identifier: 00 00 00 01:28:48.059 Multi-path I/O 01:28:48.059 May have multiple subsystem ports: Yes 01:28:48.059 May have multiple controllers: Yes 01:28:48.059 Associated with SR-IOV VF: No 01:28:48.059 Max Data Transfer Size: Unlimited 01:28:48.059 Max Number of Namespaces: 1024 01:28:48.059 Max Number of I/O Queues: 128 01:28:48.059 NVMe Specification Version (VS): 1.3 01:28:48.059 NVMe Specification Version (Identify): 1.3 01:28:48.059 Maximum Queue Entries: 1024 01:28:48.059 Contiguous Queues Required: No 01:28:48.059 Arbitration Mechanisms Supported 01:28:48.059 Weighted Round Robin: Not Supported 01:28:48.059 Vendor Specific: Not Supported 01:28:48.059 Reset Timeout: 7500 ms 01:28:48.059 Doorbell Stride: 4 bytes 01:28:48.059 NVM Subsystem Reset: Not Supported 01:28:48.059 Command Sets Supported 01:28:48.059 NVM Command Set: Supported 01:28:48.059 Boot Partition: Not Supported 01:28:48.059 Memory 
Page Size Minimum: 4096 bytes 01:28:48.059 Memory Page Size Maximum: 4096 bytes 01:28:48.059 Persistent Memory Region: Not Supported 01:28:48.059 Optional Asynchronous Events Supported 01:28:48.059 Namespace Attribute Notices: Supported 01:28:48.059 Firmware Activation Notices: Not Supported 01:28:48.059 ANA Change Notices: Supported 01:28:48.059 PLE Aggregate Log Change Notices: Not Supported 01:28:48.059 LBA Status Info Alert Notices: Not Supported 01:28:48.059 EGE Aggregate Log Change Notices: Not Supported 01:28:48.059 Normal NVM Subsystem Shutdown event: Not Supported 01:28:48.059 Zone Descriptor Change Notices: Not Supported 01:28:48.059 Discovery Log Change Notices: Not Supported 01:28:48.059 Controller Attributes 01:28:48.059 128-bit Host Identifier: Supported 01:28:48.059 Non-Operational Permissive Mode: Not Supported 01:28:48.059 NVM Sets: Not Supported 01:28:48.059 Read Recovery Levels: Not Supported 01:28:48.059 Endurance Groups: Not Supported 01:28:48.059 Predictable Latency Mode: Not Supported 01:28:48.059 Traffic Based Keep ALive: Supported 01:28:48.059 Namespace Granularity: Not Supported 01:28:48.059 SQ Associations: Not Supported 01:28:48.059 UUID List: Not Supported 01:28:48.059 Multi-Domain Subsystem: Not Supported 01:28:48.059 Fixed Capacity Management: Not Supported 01:28:48.059 Variable Capacity Management: Not Supported 01:28:48.059 Delete Endurance Group: Not Supported 01:28:48.059 Delete NVM Set: Not Supported 01:28:48.059 Extended LBA Formats Supported: Not Supported 01:28:48.059 Flexible Data Placement Supported: Not Supported 01:28:48.059 01:28:48.059 Controller Memory Buffer Support 01:28:48.059 ================================ 01:28:48.059 Supported: No 01:28:48.059 01:28:48.059 Persistent Memory Region Support 01:28:48.059 ================================ 01:28:48.059 Supported: No 01:28:48.059 01:28:48.059 Admin Command Set Attributes 01:28:48.059 ============================ 01:28:48.059 Security Send/Receive: Not Supported 01:28:48.059 Format NVM: Not Supported 01:28:48.059 Firmware Activate/Download: Not Supported 01:28:48.059 Namespace Management: Not Supported 01:28:48.059 Device Self-Test: Not Supported 01:28:48.059 Directives: Not Supported 01:28:48.059 NVMe-MI: Not Supported 01:28:48.059 Virtualization Management: Not Supported 01:28:48.059 Doorbell Buffer Config: Not Supported 01:28:48.059 Get LBA Status Capability: Not Supported 01:28:48.059 Command & Feature Lockdown Capability: Not Supported 01:28:48.059 Abort Command Limit: 4 01:28:48.059 Async Event Request Limit: 4 01:28:48.059 Number of Firmware Slots: N/A 01:28:48.059 Firmware Slot 1 Read-Only: N/A 01:28:48.059 Firmware Activation Without Reset: N/A 01:28:48.059 Multiple Update Detection Support: N/A 01:28:48.059 Firmware Update Granularity: No Information Provided 01:28:48.059 Per-Namespace SMART Log: Yes 01:28:48.059 Asymmetric Namespace Access Log Page: Supported 01:28:48.060 ANA Transition Time : 10 sec 01:28:48.060 01:28:48.060 Asymmetric Namespace Access Capabilities 01:28:48.060 ANA Optimized State : Supported 01:28:48.060 ANA Non-Optimized State : Supported 01:28:48.060 ANA Inaccessible State : Supported 01:28:48.060 ANA Persistent Loss State : Supported 01:28:48.060 ANA Change State : Supported 01:28:48.060 ANAGRPID is not changed : No 01:28:48.060 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:28:48.060 01:28:48.060 ANA Group Identifier Maximum : 128 01:28:48.060 Number of ANA Group Identifiers : 128 01:28:48.060 Max Number of Allowed Namespaces : 1024 01:28:48.060 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 01:28:48.060 Command Effects Log Page: Supported 01:28:48.060 Get Log Page Extended Data: Supported 01:28:48.060 Telemetry Log Pages: Not Supported 01:28:48.060 Persistent Event Log Pages: Not Supported 01:28:48.060 Supported Log Pages Log Page: May Support 01:28:48.060 Commands Supported & Effects Log Page: Not Supported 01:28:48.060 Feature Identifiers & Effects Log Page:May Support 01:28:48.060 NVMe-MI Commands & Effects Log Page: May Support 01:28:48.060 Data Area 4 for Telemetry Log: Not Supported 01:28:48.060 Error Log Page Entries Supported: 128 01:28:48.060 Keep Alive: Supported 01:28:48.060 Keep Alive Granularity: 1000 ms 01:28:48.060 01:28:48.060 NVM Command Set Attributes 01:28:48.060 ========================== 01:28:48.060 Submission Queue Entry Size 01:28:48.060 Max: 64 01:28:48.060 Min: 64 01:28:48.060 Completion Queue Entry Size 01:28:48.060 Max: 16 01:28:48.060 Min: 16 01:28:48.060 Number of Namespaces: 1024 01:28:48.060 Compare Command: Not Supported 01:28:48.060 Write Uncorrectable Command: Not Supported 01:28:48.060 Dataset Management Command: Supported 01:28:48.060 Write Zeroes Command: Supported 01:28:48.060 Set Features Save Field: Not Supported 01:28:48.060 Reservations: Not Supported 01:28:48.060 Timestamp: Not Supported 01:28:48.060 Copy: Not Supported 01:28:48.060 Volatile Write Cache: Present 01:28:48.060 Atomic Write Unit (Normal): 1 01:28:48.060 Atomic Write Unit (PFail): 1 01:28:48.060 Atomic Compare & Write Unit: 1 01:28:48.060 Fused Compare & Write: Not Supported 01:28:48.060 Scatter-Gather List 01:28:48.060 SGL Command Set: Supported 01:28:48.060 SGL Keyed: Not Supported 01:28:48.060 SGL Bit Bucket Descriptor: Not Supported 01:28:48.060 SGL Metadata Pointer: Not Supported 01:28:48.060 Oversized SGL: Not Supported 01:28:48.060 SGL Metadata Address: Not Supported 01:28:48.060 SGL Offset: Supported 01:28:48.060 Transport SGL Data Block: Not Supported 01:28:48.060 Replay Protected Memory Block: Not Supported 01:28:48.060 01:28:48.060 Firmware Slot Information 01:28:48.060 ========================= 01:28:48.060 Active slot: 0 01:28:48.060 01:28:48.060 Asymmetric Namespace Access 01:28:48.060 =========================== 01:28:48.060 Change Count : 0 01:28:48.060 Number of ANA Group Descriptors : 1 01:28:48.060 ANA Group Descriptor : 0 01:28:48.060 ANA Group ID : 1 01:28:48.060 Number of NSID Values : 1 01:28:48.060 Change Count : 0 01:28:48.060 ANA State : 1 01:28:48.060 Namespace Identifier : 1 01:28:48.060 01:28:48.060 Commands Supported and Effects 01:28:48.060 ============================== 01:28:48.060 Admin Commands 01:28:48.060 -------------- 01:28:48.060 Get Log Page (02h): Supported 01:28:48.060 Identify (06h): Supported 01:28:48.060 Abort (08h): Supported 01:28:48.060 Set Features (09h): Supported 01:28:48.060 Get Features (0Ah): Supported 01:28:48.060 Asynchronous Event Request (0Ch): Supported 01:28:48.060 Keep Alive (18h): Supported 01:28:48.060 I/O Commands 01:28:48.060 ------------ 01:28:48.060 Flush (00h): Supported 01:28:48.060 Write (01h): Supported LBA-Change 01:28:48.060 Read (02h): Supported 01:28:48.060 Write Zeroes (08h): Supported LBA-Change 01:28:48.060 Dataset Management (09h): Supported 01:28:48.060 01:28:48.060 Error Log 01:28:48.060 ========= 01:28:48.060 Entry: 0 01:28:48.060 Error Count: 0x3 01:28:48.060 Submission Queue Id: 0x0 01:28:48.060 Command Id: 0x5 01:28:48.060 Phase Bit: 0 01:28:48.060 Status Code: 0x2 01:28:48.060 Status Code Type: 0x0 01:28:48.060 Do Not Retry: 1 01:28:48.060 Error 
Location: 0x28 01:28:48.060 LBA: 0x0 01:28:48.060 Namespace: 0x0 01:28:48.060 Vendor Log Page: 0x0 01:28:48.060 ----------- 01:28:48.060 Entry: 1 01:28:48.060 Error Count: 0x2 01:28:48.060 Submission Queue Id: 0x0 01:28:48.060 Command Id: 0x5 01:28:48.060 Phase Bit: 0 01:28:48.060 Status Code: 0x2 01:28:48.060 Status Code Type: 0x0 01:28:48.060 Do Not Retry: 1 01:28:48.060 Error Location: 0x28 01:28:48.060 LBA: 0x0 01:28:48.060 Namespace: 0x0 01:28:48.060 Vendor Log Page: 0x0 01:28:48.060 ----------- 01:28:48.060 Entry: 2 01:28:48.060 Error Count: 0x1 01:28:48.060 Submission Queue Id: 0x0 01:28:48.060 Command Id: 0x4 01:28:48.060 Phase Bit: 0 01:28:48.060 Status Code: 0x2 01:28:48.060 Status Code Type: 0x0 01:28:48.060 Do Not Retry: 1 01:28:48.060 Error Location: 0x28 01:28:48.060 LBA: 0x0 01:28:48.060 Namespace: 0x0 01:28:48.060 Vendor Log Page: 0x0 01:28:48.060 01:28:48.060 Number of Queues 01:28:48.060 ================ 01:28:48.060 Number of I/O Submission Queues: 128 01:28:48.060 Number of I/O Completion Queues: 128 01:28:48.060 01:28:48.060 ZNS Specific Controller Data 01:28:48.060 ============================ 01:28:48.060 Zone Append Size Limit: 0 01:28:48.060 01:28:48.060 01:28:48.060 Active Namespaces 01:28:48.060 ================= 01:28:48.060 get_feature(0x05) failed 01:28:48.060 Namespace ID:1 01:28:48.060 Command Set Identifier: NVM (00h) 01:28:48.060 Deallocate: Supported 01:28:48.060 Deallocated/Unwritten Error: Not Supported 01:28:48.060 Deallocated Read Value: Unknown 01:28:48.060 Deallocate in Write Zeroes: Not Supported 01:28:48.060 Deallocated Guard Field: 0xFFFF 01:28:48.060 Flush: Supported 01:28:48.060 Reservation: Not Supported 01:28:48.060 Namespace Sharing Capabilities: Multiple Controllers 01:28:48.060 Size (in LBAs): 1310720 (5GiB) 01:28:48.060 Capacity (in LBAs): 1310720 (5GiB) 01:28:48.060 Utilization (in LBAs): 1310720 (5GiB) 01:28:48.060 UUID: ccb1045b-9996-4cc1-a0b2-3ebfdd505947 01:28:48.060 Thin Provisioning: Not Supported 01:28:48.060 Per-NS Atomic Units: Yes 01:28:48.060 Atomic Boundary Size (Normal): 0 01:28:48.060 Atomic Boundary Size (PFail): 0 01:28:48.060 Atomic Boundary Offset: 0 01:28:48.060 NGUID/EUI64 Never Reused: No 01:28:48.060 ANA group ID: 1 01:28:48.060 Namespace Write Protected: No 01:28:48.060 Number of LBA Formats: 1 01:28:48.060 Current LBA Format: LBA Format #00 01:28:48.060 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:28:48.060 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:28:48.060 rmmod nvme_tcp 01:28:48.060 rmmod nvme_fabrics 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 01:28:48.060 05:23:39 
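
Everything from here on is teardown. nvmftestfini has already synced and removed the nvme-tcp and nvme-fabrics modules (the rmmod lines above); the entries that follow strip the SPDK-tagged iptables rules, delete the veth/bridge/namespace topology in reverse order, and clean_kernel_target unwinds the configfs tree before setup.sh hands the NVMe controllers back to uio_pci_generic for the next test. Condensed, the kernel-target half of that cleanup is roughly the following (the destination of the echo 0 is not visible in the trace; it is presumably the namespace enable attribute):

  # drop only the firewall rules this test added: they all carry an SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # unwind the kernel target in the reverse order of its creation
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet

The whole identify_kernel_target pass takes under three seconds of wall time, as the "real 0m2.747s" summary below reports.
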
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:28:48.060 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:28:48.319 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:28:48.576 05:23:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:28:49.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:49.140 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:28:49.140 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:28:49.140 ************************************ 01:28:49.140 END TEST nvmf_identify_kernel_target 01:28:49.140 ************************************ 01:28:49.140 01:28:49.140 real 0m2.747s 01:28:49.140 user 0m0.964s 01:28:49.140 sys 0m1.092s 01:28:49.140 05:23:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:49.140 05:23:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:28:49.397 ************************************ 01:28:49.397 START TEST nvmf_auth_host 01:28:49.397 ************************************ 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:28:49.397 * Looking for test storage... 
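
The next test, nvmf_auth_host, starts the same way every host test does: it locates its test storage and then probes the installed lcov version. The lt 1.15 2 / cmp_versions trace in the entries below splits both version strings on dots and compares them field by field to decide whether the old or the new lcov option syntax applies. A simplified standalone helper with the same comparison semantics (this is not the actual cmp_versions implementation from scripts/common.sh):

  # version_lt A B: succeed if version A is strictly older than version B
  version_lt() {
      local IFS=.
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=${#a[@]}
      (( ${#b[@]} > max )) && max=${#b[@]}
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                        # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # same outcome as the traced check
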
01:28:49.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 01:28:49.397 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:49.398 --rc genhtml_branch_coverage=1 01:28:49.398 --rc genhtml_function_coverage=1 01:28:49.398 --rc genhtml_legend=1 01:28:49.398 --rc geninfo_all_blocks=1 01:28:49.398 --rc geninfo_unexecuted_blocks=1 01:28:49.398 01:28:49.398 ' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:49.398 --rc genhtml_branch_coverage=1 01:28:49.398 --rc genhtml_function_coverage=1 01:28:49.398 --rc genhtml_legend=1 01:28:49.398 --rc geninfo_all_blocks=1 01:28:49.398 --rc geninfo_unexecuted_blocks=1 01:28:49.398 01:28:49.398 ' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:49.398 --rc genhtml_branch_coverage=1 01:28:49.398 --rc genhtml_function_coverage=1 01:28:49.398 --rc genhtml_legend=1 01:28:49.398 --rc geninfo_all_blocks=1 01:28:49.398 --rc geninfo_unexecuted_blocks=1 01:28:49.398 01:28:49.398 ' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:49.398 --rc genhtml_branch_coverage=1 01:28:49.398 --rc genhtml_function_coverage=1 01:28:49.398 --rc genhtml_legend=1 01:28:49.398 --rc geninfo_all_blocks=1 01:28:49.398 --rc geninfo_unexecuted_blocks=1 01:28:49.398 01:28:49.398 ' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:28:49.398 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:28:49.398 Cannot find device "nvmf_init_br" 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:28:49.398 Cannot find device "nvmf_init_br2" 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:28:49.398 Cannot find device "nvmf_tgt_br" 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:28:49.398 Cannot find device "nvmf_tgt_br2" 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:28:49.398 Cannot find device "nvmf_init_br" 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 01:28:49.398 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:28:49.398 Cannot find device "nvmf_init_br2" 01:28:49.398 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 01:28:49.398 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:28:49.656 Cannot find device "nvmf_tgt_br" 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:28:49.656 Cannot find device "nvmf_tgt_br2" 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:28:49.656 Cannot find device "nvmf_br" 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:28:49.656 Cannot find device "nvmf_init_if" 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:28:49.656 Cannot find device "nvmf_init_if2" 01:28:49.656 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:49.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:49.657 05:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:49.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
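The entries above show nvmf_veth_init tearing down any stale interfaces and then building the test network used by the TCP transport: two initiator-side veth pairs kept in the default namespace, two target-side veth pairs whose "if" ends are moved into nvmf_tgt_ns_spdk, and all bridge-side peers enslaved to nvmf_br. A condensed sketch of that same topology, using the interface names and 10.0.0.0/24 addresses from the trace (run as root; this is a distillation of the commands logged above, not a replacement for nvmf/common.sh):

    # Namespace for the SPDK target and a bridge tying both sides together
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge

    # Initiator-side veth pairs stay in the default namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2

    # Target-side veth pairs; the "if" ends move into the target namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addresses: initiators get 10.0.0.1/.2, the namespaced target ends 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the bridge-side peers to nvmf_br
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The result is that the default namespace (the NVMe host side) and nvmf_tgt_ns_spdk (the SPDK target side) sit on the same L2 segment via nvmf_br, which is what the ping checks below verify before the target is started.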
01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:28:49.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:28:49.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 01:28:49.657 01:28:49.657 --- 10.0.0.3 ping statistics --- 01:28:49.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:49.657 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:28:49.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:28:49.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 01:28:49.657 01:28:49.657 --- 10.0.0.4 ping statistics --- 01:28:49.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:49.657 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:28:49.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:28:49.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:28:49.657 01:28:49.657 --- 10.0.0.1 ping statistics --- 01:28:49.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:49.657 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:28:49.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:28:49.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 01:28:49.657 01:28:49.657 --- 10.0.0.2 ping statistics --- 01:28:49.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:49.657 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=91447 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 91447 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91447 ']' 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
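Once the four pings succeed, the iptables rules from the trace open TCP port 4420 on the initiator interfaces and the target application is launched inside the namespace. A minimal sketch of that launch, lifted from the commands logged above; the SPDK_NVMF comment tag is assumed to exist only so the rules can be found and removed again at teardown, and `-e 0xFFFF` simply reproduces the NVMF_APP defaults recorded earlier in the trace:

    # Accept NVMe/TCP traffic on the initiator-side interfaces and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # The target runs inside the namespace; -L nvme_auth enables the auth debug
    # log flag, -i 0 and -e 0xFFFF come from the NVMF_APP defaults in the trace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # waitforlisten then polls until the process accepts RPCs on /var/tmp/spdk.sock,
    # which is the "Waiting for process to start up..." message seen in the log.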
01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:49.657 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9dbe3605fad9c94c44d7ccadaf7a08bd 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fbb 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9dbe3605fad9c94c44d7ccadaf7a08bd 0 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9dbe3605fad9c94c44d7ccadaf7a08bd 0 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9dbe3605fad9c94c44d7ccadaf7a08bd 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:28:50.594 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fbb 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fbb 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Fbb 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.853 05:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=970afe3f741dbf0e1fb629cd3199dc4a89890520bb1946ce19277c4930898267 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6I7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 970afe3f741dbf0e1fb629cd3199dc4a89890520bb1946ce19277c4930898267 3 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 970afe3f741dbf0e1fb629cd3199dc4a89890520bb1946ce19277c4930898267 3 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=970afe3f741dbf0e1fb629cd3199dc4a89890520bb1946ce19277c4930898267 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6I7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6I7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6I7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=edabd2686061714ffc937940e232a56a551dfa5c7cc3c793 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qnD 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key edabd2686061714ffc937940e232a56a551dfa5c7cc3c793 0 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 edabd2686061714ffc937940e232a56a551dfa5c7cc3c793 0 
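The gen_dhchap_key calls above, and the ones that follow for keys 1 through 4, produce the DH-HMAC-CHAP secrets that host/auth.sh later registers with the keyring. A sketch of what one call appears to do, based on the trace: half of <len> random bytes are read as a hex string, then wrapped in the DHHC-1:<digest-id>: framing. The inline "python -" step is not expanded in the log, so the CRC-32 suffix shown here is an assumption about that step (consistent with the usual DH-HMAC-CHAP secret representation), not a verified copy of it:

    # gen_dhchap_key <digest> <len>: digest ids are null=0 sha256=1 sha384=2 sha512=3
    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 48 hex characters for len=48
    file=$(mktemp -t spdk.key-null.XXX)

    # DHHC-1:<digest-id>:<base64(key characters + assumed CRC-32 suffix)>:
    python3 - "$key" 0 <<'EOF' > "$file"
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    secret = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
    print(f"DHHC-1:{digest:02x}:{secret}:")
    EOF
    chmod 0600 "$file"
    echo "$file"                               # path returned to host/auth.sh (keys[i] / ckeys[i])

Decoding the base64 portion of the keys printed later in the trace gives back the original ASCII hex string plus four trailing bytes, which is why the key material here is the hex string itself rather than its binary decoding.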
01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=edabd2686061714ffc937940e232a56a551dfa5c7cc3c793 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qnD 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qnD 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qnD 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0f0dae5445d4f78e28c11f99cac6a7f5a89583c89e38167a 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VcA 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0f0dae5445d4f78e28c11f99cac6a7f5a89583c89e38167a 2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0f0dae5445d4f78e28c11f99cac6a7f5a89583c89e38167a 2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0f0dae5445d4f78e28c11f99cac6a7f5a89583c89e38167a 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VcA 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VcA 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VcA 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.853 05:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21ab10aac49d0b317c5e6628e9bcd6f7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vf2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21ab10aac49d0b317c5e6628e9bcd6f7 1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21ab10aac49d0b317c5e6628e9bcd6f7 1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21ab10aac49d0b317c5e6628e9bcd6f7 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vf2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vf2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vf2 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=130c3561ad9db121678da7c7757fc104 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QUc 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 130c3561ad9db121678da7c7757fc104 1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 130c3561ad9db121678da7c7757fc104 1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:50.853 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=130c3561ad9db121678da7c7757fc104 01:28:50.854 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:28:50.854 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:50.854 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QUc 01:28:50.854 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QUc 01:28:50.854 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.QUc 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=32699d3211e0fc401dbc15bafc4a7ee68c5d712be65a203f 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Rkz 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 32699d3211e0fc401dbc15bafc4a7ee68c5d712be65a203f 2 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 32699d3211e0fc401dbc15bafc4a7ee68c5d712be65a203f 2 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=32699d3211e0fc401dbc15bafc4a7ee68c5d712be65a203f 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Rkz 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Rkz 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Rkz 01:28:51.112 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:28:51.113 05:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c70d4785917542055462e0f4a44237a 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cOq 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c70d4785917542055462e0f4a44237a 0 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c70d4785917542055462e0f4a44237a 0 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c70d4785917542055462e0f4a44237a 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cOq 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cOq 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cOq 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43b91d0dd4f99692e84fe8ebb1aef678e3c11ec87096dd347c3995f535731353 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jfp 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43b91d0dd4f99692e84fe8ebb1aef678e3c11ec87096dd347c3995f535731353 3 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43b91d0dd4f99692e84fe8ebb1aef678e3c11ec87096dd347c3995f535731353 3 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43b91d0dd4f99692e84fe8ebb1aef678e3c11ec87096dd347c3995f535731353 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jfp 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jfp 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Jfp 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91447 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91447 ']' 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:51.113 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fbb 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6I7 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6I7 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qnD 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VcA ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.VcA 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vf2 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.QUc ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QUc 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Rkz 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cOq ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cOq 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Jfp 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:51.371 05:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:51.371 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:28:51.372 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:28:51.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:28:51.629 Waiting for block devices as requested 01:28:51.629 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:28:51.886 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:28:52.143 No valid GPT data, bailing 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:28:52.143 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:28:52.401 No valid GPT data, bailing 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:28:52.401 No valid GPT data, bailing 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:28:52.401 No valid GPT data, bailing 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.1 -t tcp -s 4420 01:28:52.401 01:28:52.401 Discovery Log Number of Records 2, Generation counter 2 01:28:52.401 =====Discovery Log Entry 0====== 01:28:52.401 trtype: tcp 01:28:52.401 adrfam: ipv4 01:28:52.401 subtype: current discovery subsystem 01:28:52.401 treq: not specified, sq flow control disable supported 01:28:52.401 portid: 1 01:28:52.401 trsvcid: 4420 01:28:52.401 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:28:52.401 traddr: 10.0.0.1 01:28:52.401 eflags: none 01:28:52.401 sectype: none 01:28:52.401 =====Discovery Log Entry 1====== 01:28:52.401 trtype: tcp 01:28:52.401 adrfam: ipv4 01:28:52.401 subtype: nvme subsystem 01:28:52.401 treq: not specified, sq flow control disable supported 01:28:52.401 portid: 1 01:28:52.401 trsvcid: 4420 01:28:52.401 subnqn: nqn.2024-02.io.spdk:cnode0 01:28:52.401 traddr: 10.0.0.1 01:28:52.401 eflags: none 01:28:52.401 sectype: none 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:52.401 05:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:52.658 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:52.658 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:28:52.658 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.659 nvme0n1 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.659 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.915 nvme0n1 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.915 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.916 
05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:52.916 05:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:52.916 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 nvme0n1 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:53.174 05:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 nvme0n1 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.174 05:23:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.174 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.434 nvme0n1 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:28:53.434 
05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.434 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
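(Annotation, not part of the captured trace.) The sha256/ffdhe2048 pass above and the ffdhe3072 pass that follows repeat the same per-key sequence: nvmet_auth_set_key writes the DH-HMAC-CHAP digest, DH group and secret(s) for nqn.2024-02.io.spdk:host0 into the kernel nvmet configfs tree, and connect_authenticate then drives the SPDK initiator over JSON-RPC. Below is a minimal sketch of one iteration (keyid=1); the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the assumption that key1/ckey1 were registered with the SPDK keyring earlier in host/auth.sh are inferred from this trace, not shown verbatim in it.

    # Target (kernel nvmet) side -- attribute names assumed; the xtrace only shows the bare echoes:
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # host/auth.sh@48
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # host/auth.sh@49
    echo 'DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==:' \
        > "$host/dhchap_key"                       # host/auth.sh@50 (key1)
    echo 'DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==:' \
        > "$host/dhchap_ctrl_key"                  # host/auth.sh@51 (ckey1, bidirectional)

    # Initiator (SPDK) side -- these RPCs appear verbatim in the trace:
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 when authentication succeeds
    rpc_cmd bdev_nvme_detach_controller nvme0

Each keyid in the loop swaps in a different DHHC-1 secret; keyid=4 has no controller key, so the attach is issued with only --dhchap-key key4, as seen in the trace immediately above and again in the ffdhe3072 pass.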
01:28:53.692 nvme0n1 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:53.692 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:28:53.949 05:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.949 nvme0n1 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:53.949 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:53.950 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:53.950 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:53.950 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:53.950 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:54.207 05:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:54.207 05:23:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 nvme0n1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.207 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.464 nvme0n1 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.464 05:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.721 nvme0n1 01:28:54.721 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.721 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:54.721 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.721 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:54.721 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 nvme0n1 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:54.722 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.288 05:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.288 05:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.547 nvme0n1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:28:55.547 05:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.547 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.806 nvme0n1 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:55.806 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.065 nvme0n1 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:56.065 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.066 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.326 nvme0n1 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:56.326 05:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.326 05:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.586 nvme0n1 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:56.586 05:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.492 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:58.752 nvme0n1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:58.752 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.323 nvme0n1 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:59.323 05:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.323 05:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.323 05:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.895 nvme0n1 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:28:59.895 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:28:59.896 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:28:59.896 
05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:00.509 nvme0n1 01:29:00.509 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:00.509 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:00.509 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:00.510 05:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.078 nvme0n1 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:01.078 05:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:01.078 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.079 05:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.657 nvme0n1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:01.657 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:01.658 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:02.595 nvme0n1 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:29:02.595 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:02.595 
05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:02.595 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:03.549 nvme0n1 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:03.549 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:03.550 05:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:04.131 nvme0n1 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:04.131 05:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:04.131 05:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:04.131 05:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 nvme0n1 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:29:05.072 nvme0n1 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.072 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.073 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.333 nvme0n1 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:29:05.334 
05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 nvme0n1 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.334 
05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.334 05:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.595 nvme0n1 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.595 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.856 nvme0n1 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:05.856 nvme0n1 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:05.856 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.117 
05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:06.117 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:06.118 05:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 nvme0n1 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:06.118 05:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:06.118 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.379 nvme0n1 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
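On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51 in the trace) only echoes the HMAC digest, the DH group, the key and, when present, the controller key; the xtrace does not show where those echoes are redirected. A plausible sketch, assuming they land in the usual kernel nvmet configfs attributes for this host NQN; the paths below are an assumption, not read from the log:

# Sketch only: the configfs paths are assumed, the xtrace does not show the redirections.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    local key=${keys[keyid]} ckey=${ckeys[keyid]}

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"
    echo "${key}"          > "${host_dir}/dhchap_key"
    # A controller (bidirectional) key is only configured when one exists
    [[ -z $ckey ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}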
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.379 05:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.379 05:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.640 nvme0n1 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:06.640 
05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:06.640 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.641 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
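Between the set_options and attach_controller calls, every iteration re-runs get_main_ns_ip (nvmf/common.sh@769-783 in the trace). It maps the transport to the name of an environment variable and then dereferences it, which is why the trace shows the literal NVMF_INITIATOR_IP before the final 10.0.0.1. A sketch of that logic as visible in the trace; the error returns are guesses, since only the passing path appears here:

# Sketch of nvmf/common.sh@769-783 as traced; failure branches are assumed.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # Pick the variable *name* for the active transport, then dereference it
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"    # 10.0.0.1 for the tcp/virt setup in this run
}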
01:29:06.902 nvme0n1 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:06.902 05:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:06.902 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.167 nvme0n1 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.167 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:07.168 05:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:07.168 05:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.168 05:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.429 nvme0n1 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.429 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.690 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.949 nvme0n1 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:07.949 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:07.950 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.209 nvme0n1 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.209 05:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.467 nvme0n1 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:08.467 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:08.468 05:24:00 
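At this point the same two inner loops (host/auth.sh@101-104) have completed full key sweeps for ffdhe3072 and ffdhe4096 and are starting over with ffdhe6144, still under hmac(sha384). The shape of the sweep, as far as this excerpt shows it; the digests and dhgroups arrays likely contain more entries than appear here:

# Shape of the sweep implied by host/auth.sh@101-104 in the trace.
digest=sha384                                # constant throughout this excerpt
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)     # groups visible in this excerpt
for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
    for keyid in "${!keys[@]}"; do           # host/auth.sh@102, keyids 0-4 here
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103, target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104, host side
    done
done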
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:08.468 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.032 nvme0n1 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:09.032 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:09.033 05:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.033 05:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.598 nvme0n1 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:09.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:09.599 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.165 nvme0n1 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.165 05:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.732 nvme0n1 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:10.732 05:24:02 
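The target-side half of each iteration is nvmet_auth_set_key, which the trace shows echoing the digest string ('hmac(sha384)'), the DH group, and the DHHC-1 secrets (host/auth.sh@48-@51). A plausible expansion, assuming a Linux kernel nvmet target with the usual configfs layout (the actual paths are not visible in this excerpt and are an assumption), is:

# Hypothetical expansion of: nvmet_auth_set_key sha384 ffdhe6144 <keyid>
# Configfs paths are assumed from the kernel nvmet layout; the echoed values
# come straight from the trace above.
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)'      > "${host_cfg}/dhchap_hash"      # @48: digest
echo 'ffdhe6144'         > "${host_cfg}/dhchap_dhgroup"   # @49: DH group
echo "DHHC-1:01:MjFh..." > "${host_cfg}/dhchap_key"       # @50: host secret
echo "DHHC-1:01:MTMw..." > "${host_cfg}/dhchap_ctrl_key"  # @51: only when a ckey is set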
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:10.732 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:11.300 nvme0n1 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:11.300 05:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.233 nvme0n1 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:29:12.233 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.234 05:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.800 nvme0n1 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:12.800 05:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:12.800 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:12.801 05:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:12.801 05:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:13.792 nvme0n1 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:13.792 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:13.792 
05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:14.357 nvme0n1 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:14.357 05:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 nvme0n1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:29:15.292 05:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:15.292 05:24:06 
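The repeated nvmf/common.sh lines (@769-@783) are get_main_ns_ip choosing the initiator address used in the attach call. A reconstruction consistent with the traced statements is sketched below; the transport is assumed to arrive in TEST_TRANSPORT (not shown in this excerpt), and the untraced lines @779-@782, presumably a fallback path, are omitted.

# Reconstruction of get_main_ns_ip from the traced lines; the indirect expansion
# and TEST_TRANSPORT are inferences, everything else mirrors the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP    # @772
        ["tcp"]=NVMF_INITIATOR_IP        # @773
    )
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1  # @775
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: holds the *name* of the variable
    [[ -z ${!ip} ]] && return 1            # @778: expands to 10.0.0.1 in this run
    echo "${!ip}"                          # @783
}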
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 nvme0n1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:15.292 05:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.292 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.551 nvme0n1 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:15.551 05:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.551 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 nvme0n1 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 nvme0n1 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:15.809 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.068 nvme0n1 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.068 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:29:16.326 nvme0n1 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:16.326 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.327 05:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.585 nvme0n1 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:29:16.585 
05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.585 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.844 nvme0n1 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:16.844 
05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:16.844 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.103 nvme0n1 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.103 nvme0n1 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.103 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.361 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:17.361 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:17.361 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.362 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.621 nvme0n1 01:29:17.621 05:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.621 
05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:17.621 05:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.621 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.880 nvme0n1 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:17.880 05:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:17.880 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.140 nvme0n1 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.140 05:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.140 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.399 nvme0n1 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:29:18.399 
05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.399 05:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
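A condensed sketch of the cycle the trace above keeps repeating for each digest/dhgroup/keyid combination, using only commands and flags that appear in this log (rpc_cmd is the test harness's wrapper around SPDK's JSON-RPC client, and the DHHC-1 key material is the throwaway data generated for this run, not values to reuse):

    # Target side: install the DHCHAP key for the digest/dhgroup/keyid under test.
    nvmet_auth_set_key sha512 ffdhe6144 0

    # Initiator side: restrict the negotiable digests/dhgroups, then attach with the matching key.
    # --dhchap-ctrlr-key is only passed when a controller (bidirectional) key exists for that keyid.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the authenticated controller came up, then detach before moving to the next keyid.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The trace continues below with the same cycle for the remaining key IDs and the larger ffdhe6144/ffdhe8192 groups.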
01:29:18.657 nvme0n1 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:18.657 05:24:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:18.657 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.223 nvme0n1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:19.223 05:24:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:19.223 05:24:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.223 05:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.789 nvme0n1 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:19.789 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.355 nvme0n1 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.355 05:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.920 nvme0n1 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:20.920 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:20.921 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:21.486 nvme0n1 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiZTM2MDVmYWQ5Yzk0YzQ0ZDdjY2FkYWY3YTA4YmSa7yf8: 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTcwYWZlM2Y3NDFkYmYwZTFmYjYyOWNkMzE5OWRjNGE4OTg5MDUyMGJiMTk0NmNlMTkyNzdjNDkzMDg5ODI2Nxb+KdA=: 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:21.486 05:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:21.486 05:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.419 nvme0n1 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:22.419 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:29:22.420 05:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.420 05:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.987 nvme0n1 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:22.987 05:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:23.921 nvme0n1 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzI2OTlkMzIxMWUwZmM0MDFkYmMxNWJhZmM0YTdlZTY4YzVkNzEyYmU2NWEyMDNmAreE3A==: 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM3MGQ0Nzg1OTE3NTQyMDU1NDYyZTBmNGE0NDIzN2ETPWZI: 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:23.921 05:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:24.855 nvme0n1 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:29:24.855 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNiOTFkMGRkNGY5OTY5MmU4NGZlOGViYjFhZWY2NzhlM2MxMWVjODcwOTZkZDM0N2MzOTk1ZjUzNTczMTM1M+FNvyI=: 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:29:24.856 05:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:24.856 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.421 nvme0n1 01:29:25.421 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.422 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:29:25.422 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:29:25.422 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.422 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.422 05:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.422 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.680 2024/12/09 05:24:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:29:25.680 request: 01:29:25.680 { 01:29:25.680 "method": "bdev_nvme_attach_controller", 01:29:25.680 "params": { 01:29:25.680 "name": "nvme0", 01:29:25.680 "trtype": "tcp", 01:29:25.680 "traddr": "10.0.0.1", 01:29:25.680 "adrfam": "ipv4", 01:29:25.680 "trsvcid": "4420", 01:29:25.680 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:29:25.680 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:29:25.680 "prchk_reftag": false, 01:29:25.680 "prchk_guard": false, 01:29:25.680 "hdgst": false, 01:29:25.680 "ddgst": false, 01:29:25.680 "allow_unrecognized_csi": false 01:29:25.680 } 01:29:25.680 } 01:29:25.680 Got JSON-RPC error response 01:29:25.680 GoRPCClient: error on JSON-RPC call 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.680 2024/12/09 05:24:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:29:25.680 request: 01:29:25.680 { 01:29:25.680 "method": "bdev_nvme_attach_controller", 01:29:25.680 "params": { 01:29:25.680 "name": "nvme0", 01:29:25.680 "trtype": "tcp", 01:29:25.680 "traddr": "10.0.0.1", 01:29:25.680 "adrfam": "ipv4", 01:29:25.680 "trsvcid": "4420", 01:29:25.680 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:29:25.680 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:29:25.680 "prchk_reftag": false, 01:29:25.680 "prchk_guard": false, 
01:29:25.680 "hdgst": false, 01:29:25.680 "ddgst": false, 01:29:25.680 "dhchap_key": "key2", 01:29:25.680 "allow_unrecognized_csi": false 01:29:25.680 } 01:29:25.680 } 01:29:25.680 Got JSON-RPC error response 01:29:25.680 GoRPCClient: error on JSON-RPC call 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:29:25.680 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.681 2024/12/09 05:24:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:29:25.681 request: 01:29:25.681 { 01:29:25.681 "method": "bdev_nvme_attach_controller", 01:29:25.681 "params": { 01:29:25.681 "name": "nvme0", 01:29:25.681 "trtype": "tcp", 01:29:25.681 "traddr": "10.0.0.1", 01:29:25.681 "adrfam": "ipv4", 01:29:25.681 "trsvcid": "4420", 01:29:25.681 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:29:25.681 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:29:25.681 "prchk_reftag": false, 01:29:25.681 "prchk_guard": false, 01:29:25.681 "hdgst": false, 01:29:25.681 "ddgst": false, 01:29:25.681 "dhchap_key": "key1", 01:29:25.681 "dhchap_ctrlr_key": "ckey2", 01:29:25.681 "allow_unrecognized_csi": false 01:29:25.681 } 01:29:25.681 } 01:29:25.681 Got JSON-RPC error response 01:29:25.681 GoRPCClient: error on JSON-RPC call 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.681 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.938 nvme0n1 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.938 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.939 2024/12/09 05:24:17 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:29:25.939 request: 01:29:25.939 { 01:29:25.939 "method": "bdev_nvme_set_keys", 01:29:25.939 "params": { 01:29:25.939 "name": "nvme0", 01:29:25.939 "dhchap_key": "key1", 01:29:25.939 "dhchap_ctrlr_key": "ckey2" 01:29:25.939 } 01:29:25.939 } 01:29:25.939 Got JSON-RPC error response 01:29:25.939 GoRPCClient: error on JSON-RPC call 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:29:25.939 05:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:29:27.315 05:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRhYmQyNjg2MDYxNzE0ZmZjOTM3OTQwZTIzMmE1NmE1NTFkZmE1YzdjYzNjNzkzZTE98g==: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYwZGFlNTQ0NWQ0Zjc4ZTI4YzExZjk5Y2FjNmE3ZjVhODk1ODNjODllMzgxNjdhabO6JA==: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:27.315 nvme0n1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjFhYjEwYWFjNDlkMGIzMTdjNWU2NjI4ZTliY2Q2Zjc16bSx: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMwYzM1NjFhZDlkYjEyMTY3OGRhN2M3NzU3ZmMxMDRogley: 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:27.315 2024/12/09 05:24:18 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:29:27.315 request: 01:29:27.315 { 01:29:27.315 "method": "bdev_nvme_set_keys", 01:29:27.315 "params": { 01:29:27.315 "name": "nvme0", 01:29:27.315 "dhchap_key": "key2", 01:29:27.315 "dhchap_ctrlr_key": "ckey1" 01:29:27.315 } 01:29:27.315 } 01:29:27.315 Got JSON-RPC error response 01:29:27.315 GoRPCClient: error on JSON-RPC call 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:29:27.315 05:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 01:29:27.315 05:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:29:28.246 rmmod nvme_tcp 01:29:28.246 rmmod nvme_fabrics 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 91447 ']' 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 91447 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 91447 ']' 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 91447 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:28.246 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91447 01:29:28.503 killing process with pid 91447 01:29:28.503 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:28.503 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 01:29:28.503 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91447' 01:29:28.503 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 91447 01:29:28.503 05:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 91447 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:29:28.503 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:29:28.761 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:29:28.762 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:29:29.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:29:29.327 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:29:29.585 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:29:29.585 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Fbb /tmp/spdk.key-null.qnD /tmp/spdk.key-sha256.vf2 /tmp/spdk.key-sha384.Rkz /tmp/spdk.key-sha512.Jfp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:29:29.585 05:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:29:29.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:29:29.843 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:29:29.843 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:29:29.843 01:29:29.843 real 0m40.511s 01:29:29.843 user 0m30.741s 01:29:29.843 sys 0m3.358s 01:29:29.843 ************************************ 01:29:29.843 END TEST nvmf_auth_host 01:29:29.843 ************************************ 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:29:29.843 ************************************ 01:29:29.843 START TEST nvmf_digest 01:29:29.843 
************************************ 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:29:29.843 * Looking for test storage... 01:29:29.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 01:29:29.843 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:29:29.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:29.844 --rc genhtml_branch_coverage=1 01:29:29.844 --rc genhtml_function_coverage=1 01:29:29.844 --rc genhtml_legend=1 01:29:29.844 --rc geninfo_all_blocks=1 01:29:29.844 --rc geninfo_unexecuted_blocks=1 01:29:29.844 01:29:29.844 ' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:29:29.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:29.844 --rc genhtml_branch_coverage=1 01:29:29.844 --rc genhtml_function_coverage=1 01:29:29.844 --rc genhtml_legend=1 01:29:29.844 --rc geninfo_all_blocks=1 01:29:29.844 --rc geninfo_unexecuted_blocks=1 01:29:29.844 01:29:29.844 ' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:29:29.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:29.844 --rc genhtml_branch_coverage=1 01:29:29.844 --rc genhtml_function_coverage=1 01:29:29.844 --rc genhtml_legend=1 01:29:29.844 --rc geninfo_all_blocks=1 01:29:29.844 --rc geninfo_unexecuted_blocks=1 01:29:29.844 01:29:29.844 ' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:29:29.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:29.844 --rc genhtml_branch_coverage=1 01:29:29.844 --rc genhtml_function_coverage=1 01:29:29.844 --rc genhtml_legend=1 01:29:29.844 --rc geninfo_all_blocks=1 01:29:29.844 --rc geninfo_unexecuted_blocks=1 01:29:29.844 01:29:29.844 ' 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:29:29.844 05:24:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:29:29.844 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:29:30.101 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:29:30.101 Cannot find device "nvmf_init_br" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:29:30.101 Cannot find device "nvmf_init_br2" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:29:30.101 Cannot find device "nvmf_tgt_br" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 01:29:30.101 Cannot find device "nvmf_tgt_br2" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:29:30.101 Cannot find device "nvmf_init_br" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:29:30.101 Cannot find device "nvmf_init_br2" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:29:30.101 Cannot find device "nvmf_tgt_br" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:29:30.101 Cannot find device "nvmf_tgt_br2" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:29:30.101 Cannot find device "nvmf_br" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:29:30.101 Cannot find device "nvmf_init_if" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:29:30.101 Cannot find device "nvmf_init_if2" 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:29:30.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:29:30.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:29:30.101 05:24:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:29:30.101 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:29:30.102 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:29:30.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:29:30.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 01:29:30.360 01:29:30.360 --- 10.0.0.3 ping statistics --- 01:29:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:30.360 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:29:30.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:29:30.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 01:29:30.360 01:29:30.360 --- 10.0.0.4 ping statistics --- 01:29:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:30.360 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:29:30.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:29:30.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 01:29:30.360 01:29:30.360 --- 10.0.0.1 ping statistics --- 01:29:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:30.360 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:29:30.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:29:30.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 01:29:30.360 01:29:30.360 --- 10.0.0.2 ping statistics --- 01:29:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:30.360 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:29:30.360 ************************************ 01:29:30.360 START TEST nvmf_digest_clean 01:29:30.360 ************************************ 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
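Note: the earlier "Cannot find device" / "Cannot open network namespace" failures are only the pre-clean step removing leftovers from a previous run (each failing command is followed by "true" and ignored). What nvmf_veth_init built above is a namespace-plus-bridge topology: four veth pairs, the target ends (nvmf_tgt_if / nvmf_tgt_if2) moved into nvmf_tgt_ns_spdk, addresses 10.0.0.1-.4/24, all host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and the four pings confirming reachability in both directions. A reduced single-pair sketch of the same layout, not part of the test output, with interface names and addresses copied from this log:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                          # bridge the two host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                               # initiator -> target sanity check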
01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93166 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93166 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93166 ']' 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:30.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:30.360 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:29:30.360 [2024-12-09 05:24:21.814145] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:30.360 [2024-12-09 05:24:21.814204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:29:30.360 [2024-12-09 05:24:21.952573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:30.618 [2024-12-09 05:24:21.986740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:29:30.618 [2024-12-09 05:24:21.986913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:29:30.618 [2024-12-09 05:24:21.986924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:29:30.618 [2024-12-09 05:24:21.986929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:29:30.618 [2024-12-09 05:24:21.986934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
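The target above is launched inside the namespace with an explicit shm id and --wait-for-rpc, and the harness then blocks in waitforlisten until /var/tmp/spdk.sock accepts RPCs. A minimal stand-in for that sequence (paths and flags as they appear in this log; the polling loop is an assumption standing in for the harness helper):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # stand-in for waitforlisten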
01:29:30.618 [2024-12-09 05:24:21.987184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:31.184 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:31.184 null0 01:29:31.444 [2024-12-09 05:24:22.799753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:29:31.444 [2024-12-09 05:24:22.823829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93215 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93215 /var/tmp/bperf.sock 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93215 ']' 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:31.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:31.444 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:31.444 [2024-12-09 05:24:22.870344] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:31.444 [2024-12-09 05:24:22.870397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93215 ] 01:29:31.444 [2024-12-09 05:24:23.006657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:31.444 [2024-12-09 05:24:23.042276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:32.070 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:32.070 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:29:32.070 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:29:32.070 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:29:32.070 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:29:32.329 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:32.329 05:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:32.588 nvme0n1 01:29:32.588 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:29:32.588 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:32.588 Running I/O for 2 seconds... 
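Each run_bperf iteration follows the client-side pattern just shown: start bdevperf on its own RPC socket with --wait-for-rpc, bring its framework up, attach an NVMe-oF TCP controller (here with --ddgst, enabling the data digest on the connection), then trigger the workload through bdevperf.py. A condensed sketch of the commands this log just executed, arguments copied verbatim from the log:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock framework_start_init
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests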
01:29:34.929 19261.00 IOPS, 75.24 MiB/s [2024-12-09T05:24:26.546Z] 21516.50 IOPS, 84.05 MiB/s 01:29:34.929 Latency(us) 01:29:34.929 [2024-12-09T05:24:26.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:34.929 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:29:34.929 nvme0n1 : 2.00 21547.39 84.17 0.00 0.00 5935.07 2571.03 12451.84 01:29:34.929 [2024-12-09T05:24:26.546Z] =================================================================================================================== 01:29:34.929 [2024-12-09T05:24:26.546Z] Total : 21547.39 84.17 0.00 0.00 5935.07 2571.03 12451.84 01:29:34.929 { 01:29:34.929 "results": [ 01:29:34.929 { 01:29:34.929 "job": "nvme0n1", 01:29:34.929 "core_mask": "0x2", 01:29:34.929 "workload": "randread", 01:29:34.929 "status": "finished", 01:29:34.929 "queue_depth": 128, 01:29:34.929 "io_size": 4096, 01:29:34.929 "runtime": 2.003073, 01:29:34.929 "iops": 21547.392431528955, 01:29:34.929 "mibps": 84.16950168565998, 01:29:34.929 "io_failed": 0, 01:29:34.929 "io_timeout": 0, 01:29:34.929 "avg_latency_us": 5935.065477701557, 01:29:34.929 "min_latency_us": 2571.027692307692, 01:29:34.929 "max_latency_us": 12451.84 01:29:34.929 } 01:29:34.929 ], 01:29:34.929 "core_count": 1 01:29:34.929 } 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:29:34.929 | select(.opcode=="crc32c") 01:29:34.929 | "\(.module_name) \(.executed)"' 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93215 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93215 ']' 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93215 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93215 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:29:34.929 
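The pass/fail decision for this run does not come from the IOPS numbers but from the accel statistics queried right above: the jq filter pulls out which module executed crc32c, and with DSA disabled the test expects the software module with a non-zero executed count. The reported throughput is also internally consistent: 21547.39 IOPS x 4096 B = 88258109 B/s, or 84.17 MiB/s after dividing by 2^20, matching the table. The stats query as issued over the bdevperf socket in the log:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'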
killing process with pid 93215 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93215' 01:29:34.929 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93215 01:29:34.929 Received shutdown signal, test time was about 2.000000 seconds 01:29:34.929 01:29:34.929 Latency(us) 01:29:34.929 [2024-12-09T05:24:26.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:34.929 [2024-12-09T05:24:26.546Z] =================================================================================================================== 01:29:34.929 [2024-12-09T05:24:26.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93215 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93302 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93302 /var/tmp/bperf.sock 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93302 ']' 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:34.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:34.930 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:34.930 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:34.930 Zero copy mechanism will not be used. 01:29:34.930 [2024-12-09 05:24:26.527451] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:29:34.930 [2024-12-09 05:24:26.527498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93302 ] 01:29:35.190 [2024-12-09 05:24:26.661061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:35.190 [2024-12-09 05:24:26.690048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:36.131 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:36.392 nvme0n1 01:29:36.392 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:29:36.392 05:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:36.653 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:36.653 Zero copy mechanism will not be used. 01:29:36.653 Running I/O for 2 seconds... 
01:29:38.519 13274.00 IOPS, 1659.25 MiB/s [2024-12-09T05:24:30.136Z] 13560.00 IOPS, 1695.00 MiB/s 01:29:38.519 Latency(us) 01:29:38.519 [2024-12-09T05:24:30.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:38.519 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:29:38.519 nvme0n1 : 2.00 13551.74 1693.97 0.00 0.00 1178.15 428.50 7309.78 01:29:38.519 [2024-12-09T05:24:30.136Z] =================================================================================================================== 01:29:38.519 [2024-12-09T05:24:30.136Z] Total : 13551.74 1693.97 0.00 0.00 1178.15 428.50 7309.78 01:29:38.519 { 01:29:38.519 "results": [ 01:29:38.519 { 01:29:38.519 "job": "nvme0n1", 01:29:38.519 "core_mask": "0x2", 01:29:38.519 "workload": "randread", 01:29:38.519 "status": "finished", 01:29:38.519 "queue_depth": 16, 01:29:38.519 "io_size": 131072, 01:29:38.519 "runtime": 2.002399, 01:29:38.519 "iops": 13551.744682253637, 01:29:38.519 "mibps": 1693.9680852817046, 01:29:38.519 "io_failed": 0, 01:29:38.519 "io_timeout": 0, 01:29:38.519 "avg_latency_us": 1178.1530333817127, 01:29:38.519 "min_latency_us": 428.50461538461536, 01:29:38.519 "max_latency_us": 7309.784615384616 01:29:38.519 } 01:29:38.519 ], 01:29:38.519 "core_count": 1 01:29:38.519 } 01:29:38.519 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:29:38.519 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:29:38.519 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:29:38.519 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:29:38.519 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:29:38.519 | select(.opcode=="crc32c") 01:29:38.519 | "\(.module_name) \(.executed)"' 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93302 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93302 ']' 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93302 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93302 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
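For the 131072-byte runs the MiB/s column is simply IOPS divided by 8 (131072 B is 1/8 MiB): 13551.74 IOPS / 8 = 1693.97 MiB/s, as reported above. The "zero copy threshold (65536)" notices only mean the socket zero-copy send path is skipped for I/Os of this size, which is separate from the digest check. A one-line check of the reported figures (values taken from the JSON above):
awk 'BEGIN { printf "%.2f MiB/s\n", 13551.744682253637 * 131072 / 1048576 }'   # prints 1693.97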
01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93302' 01:29:38.777 killing process with pid 93302 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93302 01:29:38.777 Received shutdown signal, test time was about 2.000000 seconds 01:29:38.777 01:29:38.777 Latency(us) 01:29:38.777 [2024-12-09T05:24:30.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:38.777 [2024-12-09T05:24:30.394Z] =================================================================================================================== 01:29:38.777 [2024-12-09T05:24:30.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:38.777 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93302 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93381 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93381 /var/tmp/bperf.sock 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93381 ']' 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:39.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:39.035 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:39.035 [2024-12-09 05:24:30.436051] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:29:39.035 [2024-12-09 05:24:30.436100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93381 ] 01:29:39.035 [2024-12-09 05:24:30.569129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:39.035 [2024-12-09 05:24:30.598780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:39.969 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:40.226 nvme0n1 01:29:40.483 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:29:40.483 05:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:40.483 Running I/O for 2 seconds... 
01:29:42.346 29316.00 IOPS, 114.52 MiB/s [2024-12-09T05:24:33.963Z] 29415.50 IOPS, 114.90 MiB/s 01:29:42.346 Latency(us) 01:29:42.346 [2024-12-09T05:24:33.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:42.346 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:29:42.346 nvme0n1 : 2.00 29439.93 115.00 0.00 0.00 4343.17 2205.54 11292.36 01:29:42.346 [2024-12-09T05:24:33.963Z] =================================================================================================================== 01:29:42.346 [2024-12-09T05:24:33.963Z] Total : 29439.93 115.00 0.00 0.00 4343.17 2205.54 11292.36 01:29:42.346 { 01:29:42.346 "results": [ 01:29:42.346 { 01:29:42.346 "job": "nvme0n1", 01:29:42.346 "core_mask": "0x2", 01:29:42.346 "workload": "randwrite", 01:29:42.346 "status": "finished", 01:29:42.346 "queue_depth": 128, 01:29:42.346 "io_size": 4096, 01:29:42.346 "runtime": 2.002688, 01:29:42.346 "iops": 29439.932730410328, 01:29:42.346 "mibps": 114.99973722816534, 01:29:42.346 "io_failed": 0, 01:29:42.346 "io_timeout": 0, 01:29:42.346 "avg_latency_us": 4343.165151480755, 01:29:42.346 "min_latency_us": 2205.5384615384614, 01:29:42.346 "max_latency_us": 11292.356923076923 01:29:42.346 } 01:29:42.346 ], 01:29:42.346 "core_count": 1 01:29:42.346 } 01:29:42.346 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:29:42.346 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:29:42.346 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:29:42.346 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:29:42.346 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:29:42.346 | select(.opcode=="crc32c") 01:29:42.346 | "\(.module_name) \(.executed)"' 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93381 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93381 ']' 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93381 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93381 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
01:29:42.603 killing process with pid 93381 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93381' 01:29:42.603 Received shutdown signal, test time was about 2.000000 seconds 01:29:42.603 01:29:42.603 Latency(us) 01:29:42.603 [2024-12-09T05:24:34.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:42.603 [2024-12-09T05:24:34.220Z] =================================================================================================================== 01:29:42.603 [2024-12-09T05:24:34.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93381 01:29:42.603 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93381 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93472 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93472 /var/tmp/bperf.sock 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93472 ']' 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:42.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:42.861 05:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:42.861 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:42.861 Zero copy mechanism will not be used. 01:29:42.861 [2024-12-09 05:24:34.320432] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:29:42.861 [2024-12-09 05:24:34.320480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93472 ] 01:29:42.861 [2024-12-09 05:24:34.453212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:43.119 [2024-12-09 05:24:34.482327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:43.692 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:43.692 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:29:43.692 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:29:43.692 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:29:43.692 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:29:43.953 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:43.953 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:44.210 nvme0n1 01:29:44.210 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:29:44.210 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:44.210 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:44.210 Zero copy mechanism will not be used. 01:29:44.210 Running I/O for 2 seconds... 
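This final run completes the matrix exercised by nvmf_digest_clean: the four bdevperf invocations in this log are identical apart from workload, I/O size, and queue depth. The varying arguments, copied from the four command lines above:
-w randread  -o 4096   -q 128    # run 1, bperfpid 93215
-w randread  -o 131072 -q 16     # run 2, bperfpid 93302
-w randwrite -o 4096   -q 128    # run 3, bperfpid 93381
-w randwrite -o 131072 -q 16     # run 4, bperfpid 93472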
01:29:46.515 12787.00 IOPS, 1598.38 MiB/s [2024-12-09T05:24:38.132Z] 13017.00 IOPS, 1627.12 MiB/s 01:29:46.515 Latency(us) 01:29:46.515 [2024-12-09T05:24:38.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:46.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:29:46.515 nvme0n1 : 2.00 13012.68 1626.59 0.00 0.00 1226.71 749.88 9074.22 01:29:46.515 [2024-12-09T05:24:38.132Z] =================================================================================================================== 01:29:46.515 [2024-12-09T05:24:38.132Z] Total : 13012.68 1626.59 0.00 0.00 1226.71 749.88 9074.22 01:29:46.515 { 01:29:46.515 "results": [ 01:29:46.515 { 01:29:46.515 "job": "nvme0n1", 01:29:46.515 "core_mask": "0x2", 01:29:46.515 "workload": "randwrite", 01:29:46.515 "status": "finished", 01:29:46.515 "queue_depth": 16, 01:29:46.515 "io_size": 131072, 01:29:46.515 "runtime": 2.002354, 01:29:46.515 "iops": 13012.68407084861, 01:29:46.515 "mibps": 1626.5855088560763, 01:29:46.515 "io_failed": 0, 01:29:46.515 "io_timeout": 0, 01:29:46.515 "avg_latency_us": 1226.7090602489313, 01:29:46.515 "min_latency_us": 749.8830769230769, 01:29:46.515 "max_latency_us": 9074.215384615385 01:29:46.515 } 01:29:46.515 ], 01:29:46.515 "core_count": 1 01:29:46.515 } 01:29:46.515 05:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:29:46.515 05:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:29:46.515 05:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:29:46.515 05:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:29:46.515 05:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:29:46.515 | select(.opcode=="crc32c") 01:29:46.515 | "\(.module_name) \(.executed)"' 01:29:46.515 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:29:46.515 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:29:46.515 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:29:46.515 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93472 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93472 ']' 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93472 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93472 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93472' 01:29:46.516 killing process with pid 93472 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93472 01:29:46.516 Received shutdown signal, test time was about 2.000000 seconds 01:29:46.516 01:29:46.516 Latency(us) 01:29:46.516 [2024-12-09T05:24:38.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:46.516 [2024-12-09T05:24:38.133Z] =================================================================================================================== 01:29:46.516 [2024-12-09T05:24:38.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:46.516 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93472 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93166 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93166 ']' 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93166 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:46.778 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93166 01:29:46.778 killing process with pid 93166 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93166' 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93166 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93166 01:29:46.779 ************************************ 01:29:46.779 END TEST nvmf_digest_clean 01:29:46.779 ************************************ 01:29:46.779 01:29:46.779 real 0m16.588s 01:29:46.779 user 0m31.682s 01:29:46.779 sys 0m3.529s 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:46.779 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:29:47.036 ************************************ 01:29:47.036 START TEST nvmf_digest_error 01:29:47.036 ************************************ 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 01:29:47.036 05:24:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=93585 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 93585 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93585 ']' 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:29:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:47.036 05:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.036 [2024-12-09 05:24:38.450047] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:47.036 [2024-12-09 05:24:38.450098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:29:47.036 [2024-12-09 05:24:38.584627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:47.036 [2024-12-09 05:24:38.613677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:29:47.036 [2024-12-09 05:24:38.613709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:29:47.036 [2024-12-09 05:24:38.613715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:29:47.036 [2024-12-09 05:24:38.613719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:29:47.036 [2024-12-09 05:24:38.613723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
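nvmf_digest_error brings up a second target (pid 93585) the same way; the difference, visible in the lines that follow, is that before configuring the subsystem it reassigns the crc32c opcode to the accel "error" module, SPDK's fault-injecting software module, so digest failures can be provoked later in the test. Roughly the RPC issued by that rpc_cmd call (socket path as reported by waitforlisten above):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error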
01:29:47.036 [2024-12-09 05:24:38.613920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.969 [2024-12-09 05:24:39.310208] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.969 null0 01:29:47.969 [2024-12-09 05:24:39.383271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:29:47.969 [2024-12-09 05:24:39.407339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:29:47.969 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93625 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93625 /var/tmp/bperf.sock 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93625 ']' 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:47.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:47.970 05:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:47.970 [2024-12-09 05:24:39.447245] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:29:47.970 [2024-12-09 05:24:39.447291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93625 ] 01:29:47.970 [2024-12-09 05:24:39.580154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:48.226 [2024-12-09 05:24:39.609619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:48.792 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:48.792 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:29:48.792 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:48.792 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:49.048 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:49.305 nvme0n1 01:29:49.305 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:29:49.305 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:49.305 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:49.305 05:24:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:49.305 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:29:49.305 05:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:49.305 Running I/O for 2 seconds... 01:29:49.305 [2024-12-09 05:24:40.886570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.305 [2024-12-09 05:24:40.886600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.305 [2024-12-09 05:24:40.886607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.305 [2024-12-09 05:24:40.896622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.305 [2024-12-09 05:24:40.896644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.305 [2024-12-09 05:24:40.896650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.305 [2024-12-09 05:24:40.907006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.306 [2024-12-09 05:24:40.907027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.306 [2024-12-09 05:24:40.907033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.306 [2024-12-09 05:24:40.915593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.306 [2024-12-09 05:24:40.915613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.306 [2024-12-09 05:24:40.915619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.925205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.925226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.925232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.935901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.935921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.935927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:29:49.563 [2024-12-09 05:24:40.945821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.945842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.945847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.956170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.956191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.956197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.966750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.966770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.966775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.975806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.975827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.975833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.986265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.986285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.986291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:40.996239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:40.996264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:40.996269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:41.006167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.563 [2024-12-09 05:24:41.006187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.563 [2024-12-09 05:24:41.006193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.563 [2024-12-09 05:24:41.016257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.016277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.016282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.026335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.026354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.026360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.036261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.036281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.036286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.046158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.046178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.046184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.056232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.056256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.056262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.066770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.066789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.066795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.077058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.077077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.077083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.087474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.087494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.087499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.096417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.096437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.096443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.105822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.105842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.105847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.115874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.115894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.115900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.127772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.127791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.127796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.137864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.137883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.137888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.147863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.147882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:29:49.564 [2024-12-09 05:24:41.147888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.157688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.157708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.157713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.166634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.166653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.166659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.564 [2024-12-09 05:24:41.176547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.564 [2024-12-09 05:24:41.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.564 [2024-12-09 05:24:41.176580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.821 [2024-12-09 05:24:41.187440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.821 [2024-12-09 05:24:41.187460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.821 [2024-12-09 05:24:41.187465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.821 [2024-12-09 05:24:41.197354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.821 [2024-12-09 05:24:41.197375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.821 [2024-12-09 05:24:41.197381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.821 [2024-12-09 05:24:41.207838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.821 [2024-12-09 05:24:41.207857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.821 [2024-12-09 05:24:41.207863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.821 [2024-12-09 05:24:41.217911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.821 [2024-12-09 05:24:41.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:20479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.821 [2024-12-09 05:24:41.217937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.821 [2024-12-09 05:24:41.228471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.821 [2024-12-09 05:24:41.228490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.228495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.238324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.238344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.238349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.248195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.248215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.248220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.258103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.258123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.258129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.268027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.268047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.268052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.278330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.278350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.278355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.288822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.288841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.288846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.298797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.298817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.298823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.308998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.309017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.309023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.319351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.319371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.319377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.329348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.329368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.329374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.339219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.339239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.339244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.349194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.349214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.349220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.357810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.357829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.357834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.367856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.367875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.367881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.377788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.377807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.377813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.388346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.388366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.388371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.398075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.398096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.398101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.408352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.408371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.408376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.418209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.418228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.418233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:49.822 [2024-12-09 05:24:41.427334] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:49.822 [2024-12-09 05:24:41.427354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:49.822 [2024-12-09 05:24:41.427360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.437368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.437388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.437394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.447809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.447829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.447834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.458128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.458147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.458153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.468623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.468642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.468647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.478443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.478463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.478469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.489224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.489243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.489249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:29:50.080 [2024-12-09 05:24:41.499099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.499119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.499125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.509030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.509049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.509054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.519135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.519155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.519160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.529723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.529743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.529749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.539829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.539849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.539854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.548529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.548548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.548553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.558075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.558095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.558100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.568603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.568622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.568627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.578577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.578597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.578602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.080 [2024-12-09 05:24:41.588616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.080 [2024-12-09 05:24:41.588635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.080 [2024-12-09 05:24:41.588640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.598508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.598529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.598534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.608417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.608436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.608442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.619077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.619097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.619102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.628991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.629011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.629016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.639487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.639507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.639512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.650249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.650269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.650274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.660757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.660775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.660781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.670012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.670033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.670038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.679645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.679664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.679670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.081 [2024-12-09 05:24:41.689790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.081 [2024-12-09 05:24:41.689810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.081 [2024-12-09 05:24:41.689815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.699616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.699634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
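Each injected corruption above surfaces as a small group of records: a host-side 'data digest error' from nvme_tcp.c, the READ it hit, and a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0; because the controller was configured earlier with --bdev-retry-count -1, these retryable completions are retried rather than failing the run. A quick, illustrative way to tally them from a saved copy of this console output (the log file name here is hypothetical):
# count digest errors and the matching transient-transport completions
grep -c 'data digest error on tqpair' nvmf-tcp-vg-autotest-console.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-tcp-vg-autotest-console.log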
01:29:50.339 [2024-12-09 05:24:41.699640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.710714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.710733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.710738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.719440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.719460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.719466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.729849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.729869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.729874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.739303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.739323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.739328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.749228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.749248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.749253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.759700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.759719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.759724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.770274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.770293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8892 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.780551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.780576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.780581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.790672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.790691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.790696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.799734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.799753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.799758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.811004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.811024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.811030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.821066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.821086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.821091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.831467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.831487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.831492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.841490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.841510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.841515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.850186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.850206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.850211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.861345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.861364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.861369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 25176.00 IOPS, 98.34 MiB/s [2024-12-09T05:24:41.956Z] [2024-12-09 05:24:41.870905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.870925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.870930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.882422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.882443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.882448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.892436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.892456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.892462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.902549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.902576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.911283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.911304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.911310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.921105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.921125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.921130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.931297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.931317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.931322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.941951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.941971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.941976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.339 [2024-12-09 05:24:41.951484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.339 [2024-12-09 05:24:41.951504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.339 [2024-12-09 05:24:41.951509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:41.961429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:41.961448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:41.961453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:41.971424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:41.971445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:41.971450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:41.981440] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:41.981460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:41.981465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:41.991217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:41.991237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:41.991243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.000090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.000110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.000115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.010568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.020130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.020149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.020155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.030739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.030758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.030764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.040699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.040718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.040723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.050965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.050984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.597 [2024-12-09 05:24:42.050990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.597 [2024-12-09 05:24:42.061456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.597 [2024-12-09 05:24:42.061475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.061481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.071953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.071973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.071978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.082000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.082020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.082025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.090755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.090774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.090779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.100660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.100679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.100685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.109918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.109937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.109943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.121292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.121313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.121318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.131242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.131263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.131268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.141442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.141462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.141468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.151330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.151351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.151356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.161427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.161447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.161452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.171930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.171949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.171955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.182558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.182583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.182589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.193018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.193036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.193042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.598 [2024-12-09 05:24:42.203004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.598 [2024-12-09 05:24:42.203024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.598 [2024-12-09 05:24:42.203029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.856 [2024-12-09 05:24:42.213111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.213130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.213135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.221916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.221936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.221941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.231873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.231892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.242267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.242286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.242292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.252424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.252443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:29:50.857 [2024-12-09 05:24:42.252449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.262315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.262335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.262341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.273067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.273086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.273092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.282953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.282974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.282979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.292862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.292882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.292887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.302892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.302912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.302917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.312277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.312296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.312302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.323309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.323329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8877 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.323334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.333212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.333231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.333237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.343133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.343153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.343158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.353066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.353085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.353091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.363164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.363184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.363190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.373784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.373804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.373809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.383681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.383701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.383706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.393572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.393591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.393596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.403487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.403507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.403513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.414010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.414030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.414035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.424162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.424182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.424188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.434497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.434516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.434521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.444591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.444611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.444616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.454404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:50.857 [2024-12-09 05:24:42.454423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.454429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:50.857 [2024-12-09 05:24:42.464724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 
01:29:50.857 [2024-12-09 05:24:42.464743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:50.857 [2024-12-09 05:24:42.464748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.474686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.474706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.474711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.484632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.484651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.484656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.494646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.494665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.494671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.505243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.505262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.505268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.514003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.514023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.523918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.523937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.523943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.533824] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.533845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.533850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.544422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.544442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.544447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.554536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.554557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.554572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.564476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.564495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.564500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.574722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.574742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.574748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.585201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.585220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.585226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.595801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.595821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.595826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:29:51.120 [2024-12-09 05:24:42.605896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.605916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.605922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.616281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.616300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.616305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.626664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.626684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.626689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.635730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.635750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.635755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.646016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.646037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.646042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.656918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.656937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.656942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.665521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.665542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.665547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.675577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.675596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.675601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.686258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.686277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.686283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.696918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.696937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.696942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.707142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.707162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.707168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.717891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.717911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.717916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.120 [2024-12-09 05:24:42.727935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.120 [2024-12-09 05:24:42.727956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.120 [2024-12-09 05:24:42.727961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.737286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.737308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.737314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.747143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.747163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.747169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.757220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.757239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.757244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.767632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.767652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.767658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.777548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.777579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.777585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.787925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.787945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.787950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.797817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.797838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.797843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.807707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.807726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:29:51.379 [2024-12-09 05:24:42.807732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.817955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.817974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.817979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.828238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.828263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.828269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.838385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.838406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.838411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.848681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.848700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.848705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 [2024-12-09 05:24:42.858867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.858887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.858893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 25236.50 IOPS, 98.58 MiB/s [2024-12-09T05:24:42.996Z] [2024-12-09 05:24:42.870224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2315200) 01:29:51.379 [2024-12-09 05:24:42.870244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:51.379 [2024-12-09 05:24:42.870249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:51.379 01:29:51.379 Latency(us) 01:29:51.379 [2024-12-09T05:24:42.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:51.379 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, 
IO size: 4096) 01:29:51.379 nvme0n1 : 2.00 25244.04 98.61 0.00 0.00 5065.77 2797.88 14518.74 01:29:51.379 [2024-12-09T05:24:42.996Z] =================================================================================================================== 01:29:51.379 [2024-12-09T05:24:42.996Z] Total : 25244.04 98.61 0.00 0.00 5065.77 2797.88 14518.74 01:29:51.379 { 01:29:51.379 "results": [ 01:29:51.379 { 01:29:51.379 "job": "nvme0n1", 01:29:51.379 "core_mask": "0x2", 01:29:51.379 "workload": "randread", 01:29:51.379 "status": "finished", 01:29:51.379 "queue_depth": 128, 01:29:51.379 "io_size": 4096, 01:29:51.379 "runtime": 2.004473, 01:29:51.379 "iops": 25244.041700736303, 01:29:51.379 "mibps": 98.60953789350118, 01:29:51.379 "io_failed": 0, 01:29:51.379 "io_timeout": 0, 01:29:51.379 "avg_latency_us": 5065.770666861252, 01:29:51.380 "min_latency_us": 2797.8830769230767, 01:29:51.380 "max_latency_us": 14518.744615384616 01:29:51.380 } 01:29:51.380 ], 01:29:51.380 "core_count": 1 01:29:51.380 } 01:29:51.380 05:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:29:51.380 05:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:29:51.380 05:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:29:51.380 | .driver_specific 01:29:51.380 | .nvme_error 01:29:51.380 | .status_code 01:29:51.380 | .command_transient_transport_error' 01:29:51.380 05:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 )) 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93625 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93625 ']' 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93625 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93625 01:29:51.638 killing process with pid 93625 01:29:51.638 Received shutdown signal, test time was about 2.000000 seconds 01:29:51.638 01:29:51.638 Latency(us) 01:29:51.638 [2024-12-09T05:24:43.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:51.638 [2024-12-09T05:24:43.255Z] =================================================================================================================== 01:29:51.638 [2024-12-09T05:24:43.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93625' 01:29:51.638 05:24:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93625 01:29:51.638 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93625 01:29:51.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93715 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93715 /var/tmp/bperf.sock 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93715 ']' 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:51.896 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:51.896 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:51.896 Zero copy mechanism will not be used. 01:29:51.896 [2024-12-09 05:24:43.287029] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:29:51.896 [2024-12-09 05:24:43.287077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93715 ] 01:29:51.896 [2024-12-09 05:24:43.410841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:51.896 [2024-12-09 05:24:43.439999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:51.897 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:51.897 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:29:51.897 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:51.897 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:52.154 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:52.411 nvme0n1 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:29:52.411 05:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:52.672 I/O size of 131072 is greater than zero copy threshold (65536). 01:29:52.672 Zero copy mechanism will not be used. 01:29:52.672 Running I/O for 2 seconds... 
01:29:52.672 [2024-12-09 05:24:44.078678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.078712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.078720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.081672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.081696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.081702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.084110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.084131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.084137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.086237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.086259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.086265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.088788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.088817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.090771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.090792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.090798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.093454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.093477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.093483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.096271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.096294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.096299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.098158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.098179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.098184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.100969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.100992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.100997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.103003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.103024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.103029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.105445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.105469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.105475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.107473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.107493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.107499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.109891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.109913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.109918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.112744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.112766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.112771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.115412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.115433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.115438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.117349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.117370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.117376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.119537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.119573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.121595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.121616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.121621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.123828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.672 [2024-12-09 05:24:44.123849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.672 [2024-12-09 05:24:44.123855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.672 [2024-12-09 05:24:44.125969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.125990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.125995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.127963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.127983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.127989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.130159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.130180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.130185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.132224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.132253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.132258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.134426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.134446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.134452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.136373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.136395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.136400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.138442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.138463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.138469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.140522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.140543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 
[2024-12-09 05:24:44.140548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.142768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.142788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.142794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.144589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.144609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.144614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.147058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.147079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.147084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.149126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.149148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.149153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.151468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.151489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.151494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.154029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.154049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.154055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.155903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.155923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.155928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.158594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.158615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.158620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.160648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.160668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.160673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.163073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.163094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.163100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.165794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.165817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.165822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.167839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.167858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.167863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.170275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.170298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.170303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.173247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.173270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.173275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.176195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.176216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.176222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.178942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.178963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.178969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.181858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.181880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.181885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.184746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.673 [2024-12-09 05:24:44.184768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.673 [2024-12-09 05:24:44.184774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.673 [2024-12-09 05:24:44.187592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.187612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.187617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.190445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.190467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.190472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.193365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.193387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.193392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.196184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.196206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.196211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.199118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.199140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.199145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.202043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.202065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.202070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.204923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.204945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.204950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.207790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.207811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.207816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.210615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.210636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.210641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.213518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 
[2024-12-09 05:24:44.213540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.213545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.216368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.216389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.216394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.219195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.219217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.222100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.222122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.222127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.224967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.224989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.224994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.227831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.230643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.230664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.230670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.233557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.233589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.233594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.236353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.236375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.236380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.239231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.239252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.239258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.242128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.242151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.242156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.245004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.245027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.245032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.247888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.247909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.247915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.250747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.250768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.250773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.253620] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.253641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.253646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.256512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.256534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.256540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.259454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.259475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.674 [2024-12-09 05:24:44.259481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.674 [2024-12-09 05:24:44.262378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.674 [2024-12-09 05:24:44.262401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.262406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.265233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.265255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.265261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.268095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.268116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.268122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.271004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.271031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
01:29:52.675 [2024-12-09 05:24:44.273922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.273944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.273949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.276826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.276847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.276852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.279700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.279720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.279726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.675 [2024-12-09 05:24:44.282624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.675 [2024-12-09 05:24:44.282645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.675 [2024-12-09 05:24:44.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.935 [2024-12-09 05:24:44.285511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.935 [2024-12-09 05:24:44.285534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.935 [2024-12-09 05:24:44.285539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.935 [2024-12-09 05:24:44.288354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.935 [2024-12-09 05:24:44.288375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.935 [2024-12-09 05:24:44.288381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.935 [2024-12-09 05:24:44.291241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.291262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.291267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.294134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.294158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.294163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.296996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.297017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.297023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.299806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.299827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.299832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.302679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.302701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.302706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.305576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.305596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.305601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.308384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.308405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.308411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.311251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.311272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.311277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.314152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.314173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.314179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.317009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.317031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.317036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.319810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.319832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.319837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.322669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.322690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.322696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.325518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.325540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.325546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.328408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.328429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.328435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.331311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.331332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.331337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.334215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.334236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.334241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.337078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.337099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.337104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.340029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.340050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.340055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.342948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.342970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.342975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.345858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.345881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.345886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.348710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.348731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.348737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.351589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.351609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 
[2024-12-09 05:24:44.351614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.354490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.354512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.354517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.357332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.357354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.357359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.360255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.360275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.360280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.363121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.363142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.363148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.936 [2024-12-09 05:24:44.365975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.936 [2024-12-09 05:24:44.365997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.936 [2024-12-09 05:24:44.366003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.368887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.368909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.368914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.371767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.371787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.371793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.374667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.374688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.377524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.377546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.377551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.380455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.380477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.380483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.383392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.383413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.383418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.386324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.386347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.386352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.389192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.389214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.389220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.392027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.392047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.392052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.395037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.395058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.395064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.397826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.397847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.397853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.400690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.400711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.400716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.403505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.403526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.403532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.406376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.406398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.406404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.409283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.409306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.409311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.412193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.412214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.412220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.415027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.415048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.415054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.417921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.417943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.417948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.420791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.420812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.420818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.423692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.423713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.423719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.426612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.426633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.426638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.429496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.429518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.429524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.432501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 
[2024-12-09 05:24:44.432522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.432527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.435433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.435454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.435460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.937 [2024-12-09 05:24:44.438310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.937 [2024-12-09 05:24:44.438332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.937 [2024-12-09 05:24:44.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.441133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.441155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.441161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.443913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.443933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.443939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.446740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.446763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.449524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.449546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.449552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.452264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.452283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.452289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.455160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.455181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.455187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.458062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.458084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.458089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.460842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.460865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.460870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.463505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.463526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.463531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.466282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.466305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.466310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.469221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.469243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.469249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.472100] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.472121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.472127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.475001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.475023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.475028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.477912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.477934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.477940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.480805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.480827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.480832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.483614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.483634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.483639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.486477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.486499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.486505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.489300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.489322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.489327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
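The repeated completion prints in this stretch come from spdk_nvme_print_completion(): every affected READ is being completed with status (00/22), which the driver names COMMAND TRANSIENT TRANSPORT ERROR, i.e. status code type 00h (generic) with status code 22h, and dnr:0 means the command may be retried. As a rough illustration only, and not SPDK's own helper, the sketch below decodes the completion queue entry words into the same fields the log shows, assuming the standard NVMe CQE layout; the sample DW2/DW3 values are hypothetical and merely chosen to reproduce one of the entries above.

/*
 * Sketch: decode NVMe completion queue entry words into the fields the
 * SPDK log prints (sct/sc, sqhd, p, m, dnr). Illustrative only; SPDK has
 * its own spdk_nvme_cpl structure and print helpers.
 */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
	unsigned sqhd;	/* SQ head pointer	(CQE DW2, bits 15:0)  */
	unsigned p;	/* phase tag		(CQE DW3, bit 16)     */
	unsigned sc;	/* status code		(CQE DW3, bits 24:17) */
	unsigned sct;	/* status code type	(CQE DW3, bits 27:25) */
	unsigned m;	/* more			(CQE DW3, bit 30)     */
	unsigned dnr;	/* do not retry		(CQE DW3, bit 31)     */
};

static struct cqe_status decode_cqe(uint32_t dw2, uint32_t dw3)
{
	struct cqe_status s;

	s.sqhd = dw2 & 0xffff;
	s.p    = (dw3 >> 16) & 0x1;
	s.sc   = (dw3 >> 17) & 0xff;
	s.sct  = (dw3 >> 25) & 0x7;
	s.m    = (dw3 >> 30) & 0x1;
	s.dnr  = (dw3 >> 31) & 0x1;
	return s;
}

int main(void)
{
	/* Hypothetical words reproducing one log entry:
	 * sct 00h, sc 22h (TRANSIENT TRANSPORT ERROR), sqhd 0042, p/m/dnr 0. */
	uint32_t dw2 = 0x0042;
	uint32_t dw3 = 0x22u << 17;
	struct cqe_status s = decode_cqe(dw2, dw3);

	printf("(%02x/%02x) sqhd:%04x p:%u m:%u dnr:%u\n",
	       s.sct, s.sc, s.sqhd, s.p, s.m, s.dnr);
	return 0;
}

Because dnr is 0 and the status is a transient transport error rather than a media error, the initiator is free to resubmit these READs, which is why the same cid values keep reappearing in the log.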
01:29:52.938 [2024-12-09 05:24:44.492152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.492173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.492179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.495034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.495055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.495061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.497816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.497839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.497844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.500690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.500711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.500716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.503555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.503584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.503590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.506408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.506430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.506436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.509231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.938 [2024-12-09 05:24:44.509259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.938 [2024-12-09 05:24:44.512083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.938 [2024-12-09 05:24:44.512104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.512109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.514934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.514955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.514961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.517847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.517869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.517875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.520711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.520733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.520738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.523579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.523599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.523604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.526461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.526483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.526488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.529323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.529345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.529351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.532140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.532161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.532166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.534903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.534924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.534930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.537791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.537813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.537818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.540713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.540734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.540739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.543606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.543626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.543631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:52.939 [2024-12-09 05:24:44.546475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:52.939 [2024-12-09 05:24:44.546497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:52.939 [2024-12-09 05:24:44.546502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.549375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.549398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.549403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.552285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.552313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.555148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.555170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.555176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.557985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.558007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.558012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.560862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.560884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.560890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.563768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.563789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.563795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.566703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.566724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.566730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.569610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.569631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 
[2024-12-09 05:24:44.569637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.572503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.572525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.572531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.200 [2024-12-09 05:24:44.575358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.200 [2024-12-09 05:24:44.575379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.200 [2024-12-09 05:24:44.575384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.578241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.578263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.578269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.581122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.581144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.581150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.583943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.583964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.583969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.586813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.586834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.586840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.589664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.589686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.589691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.592481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.592502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.592508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.594589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.594608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.594614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.596969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.596991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.596996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.599523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.599545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.599550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.601375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.601397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.601402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.604288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.604308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.604314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.606931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.606952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.606958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.608822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.608842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.608848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.611578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.611597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.611602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.614268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.614290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.614295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.616131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.616151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.616157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.618661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.618680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.618686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.620730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.620750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.620756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.623229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.623251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.623257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.625102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.625123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.625128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.627430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.627451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.627457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.629754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.629775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.629780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.631625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.631645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.631650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.633775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.633796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.633802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.201 [2024-12-09 05:24:44.636453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.201 [2024-12-09 05:24:44.636475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.201 [2024-12-09 05:24:44.636480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.638292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 
[2024-12-09 05:24:44.638313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.638318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.640991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.641013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.641019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.642952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.642972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.642978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.645392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.645415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.645421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.647914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.647935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.647941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.649820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.649841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.649846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.652252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.652271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.652277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.655054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.655075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.655081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.657149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.657170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.657176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.659607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.659627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.659632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.662116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.662138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.662144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.664195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.664217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.664222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.666222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.666243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.666249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.668197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.668217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.668222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.670416] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.670436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.670442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.672714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.672734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.674875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.674896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.674901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.677024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.677045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.677050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.679348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.679368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.679374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.681529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.681550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.681556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.683432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.683452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.683457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
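Each pair of entries above begins with a data digest error reported from nvme_tcp_accel_seq_recv_compute_crc32_done(): the CRC32C digest carried at the end of a received NVMe/TCP data PDU did not match the digest the host computed over the payload, so the corresponding READ is completed with the transient transport error shown on the next line. The snippet below is a minimal, software-only illustration of that CRC32C comparison, not the code path named in the log; SPDK computes the digest through its accel sequence path, and the payload and "received" digest here are made up to force the mismatch branch.

/*
 * Sketch: software CRC32C (Castagnoli polynomial, reflected 0x82F63B78),
 * the checksum used for NVMe/TCP header and data digests, followed by the
 * comparison that a digest check performs. Illustration only.
 */
#include <stddef.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xffffffffu;

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82f63b78u : crc >> 1;
		}
	}
	return crc ^ 0xffffffffu;
}

int main(void)
{
	/* Hypothetical payload standing in for a received data PDU. */
	uint8_t payload[512];
	memset(payload, 0xa5, sizeof(payload));

	uint32_t computed = crc32c(payload, sizeof(payload));
	uint32_t received = 0;	/* pretend the wire carried a corrupted digest */

	if (computed != received) {
		/* This is the condition the log reports as a data digest error. */
		printf("data digest mismatch: computed 0x%08" PRIx32
		       ", received 0x%08" PRIx32 "\n", computed, received);
	}
	return 0;
}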
01:29:53.202 [2024-12-09 05:24:44.686011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.686033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.686038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.688971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.688993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.688998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.691885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.691907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.691912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.694053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.694079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.696440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.696461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.202 [2024-12-09 05:24:44.696466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.202 [2024-12-09 05:24:44.699282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.202 [2024-12-09 05:24:44.699303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.699308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.702162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.702184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.705055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.705075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.705081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.707920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.707941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.707947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.710779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.710801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.710807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.713770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.713791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.713797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.716638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.716658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.716663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.719524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.719545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.719551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.722355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.722377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.722383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.725171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.725193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.725198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.728061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.728082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.728088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.730998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.731020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.731025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.733915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.733936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.733942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.736798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.736819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.736824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.739688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.739707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.739712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.742592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.742613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.742618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.745325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.745345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.745351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.748070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.748091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.748096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.750947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.750968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.750974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.753833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.753854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.753860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.756752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.756772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.756778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.759675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.759694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.759699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.762633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.762653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 
[2024-12-09 05:24:44.762658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.765485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.765507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.765513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.768407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.768428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.768434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.771296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.771317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.771322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.774197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.774220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.203 [2024-12-09 05:24:44.774225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.203 [2024-12-09 05:24:44.777073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.203 [2024-12-09 05:24:44.777095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.777100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.779950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.779976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.782872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.782895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.782900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.785650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.785670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.785675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.788528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.788550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.788555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.791425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.791446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.791451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.794335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.794357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.794363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.797254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.797277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.797282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.800179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.800200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.800206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.803020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.803046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.805800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.805822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.805828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.808691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.808711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.808717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.204 [2024-12-09 05:24:44.811502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.204 [2024-12-09 05:24:44.811522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.204 [2024-12-09 05:24:44.811528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.814359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.814382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.814387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.817183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.817205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.817211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.820031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.820052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.820057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.822966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.822987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.822992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.825850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.825872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.825877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.828646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.828666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.828671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.831489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.831510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.831515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.834409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.834431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.834437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.837282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.837304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.837310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.840187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.840209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.840214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.843029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 
[2024-12-09 05:24:44.843050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.843055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.845906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.845929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.845934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.848811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.848832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.848837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.851542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.851573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.851578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.854327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.854349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.854354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.857139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.857160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.857166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.859885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.859907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.859912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.862781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.862803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.862808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.865669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.865690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.465 [2024-12-09 05:24:44.865695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.465 [2024-12-09 05:24:44.868512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.465 [2024-12-09 05:24:44.868534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.868539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.871393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.871415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.871420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.874288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.874311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.874317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.877163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.877185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.877190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.879995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.880016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.880021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.882729] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.882749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.882754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.885604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.885624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.885630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.888446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.888467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.888472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.891360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.891381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.891387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.894262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.894284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.894290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.897147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.897169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.897174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.900031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.900052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.900058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
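(The repeated "data digest error on tqpair" messages in this stretch of the log come from the host receive path rejecting NVMe/TCP data PDUs whose CRC32C data digest does not match the received payload; each affected READ is then completed back to the caller as a TRANSIENT TRANSPORT ERROR (00/22), as the spdk_nvme_print_completion lines show. The sketch below is an editorial illustration only, not SPDK's implementation: it shows the general shape of a CRC32C data-digest check, assuming the common CRC32C convention of an all-ones seed and a final complement; the helper names and the sample buffer are hypothetical.)

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Illustrative CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
   * checksum family NVMe/TCP uses for its optional header/data digests.
   * Bitwise form for clarity; real implementations use tables or SSE4.2. */
  static uint32_t crc32c(const uint8_t *buf, size_t len)
  {
      uint32_t crc = 0xFFFFFFFFu;
      for (size_t i = 0; i < len; i++) {
          crc ^= buf[i];
          for (int bit = 0; bit < 8; bit++)
              crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
      }
      return crc ^ 0xFFFFFFFFu;
  }

  /* Hypothetical receive-side check: recompute the digest over the received
   * payload and compare it with the digest value carried on the wire. A
   * mismatch is what surfaces above as "data digest error", after which the
   * command is failed as a transient transport error. */
  static int data_digest_ok(const uint8_t *payload, size_t len,
                            uint32_t ddgst_from_wire)
  {
      return crc32c(payload, len) == ddgst_from_wire;
  }

  int main(void)
  {
      uint8_t payload[512];
      memset(payload, 0xA5, sizeof(payload));

      uint32_t good = crc32c(payload, sizeof(payload));
      printf("digest match: %d\n",
             data_digest_ok(payload, sizeof(payload), good));

      payload[100] ^= 0x01;  /* simulate corruption in flight */
      printf("digest match after corruption: %d\n",
             data_digest_ok(payload, sizeof(payload), good));
      return 0;
  }

(How the digest is byte-ordered on the wire and how SPDK seeds or offloads the computation are outside this sketch.)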
01:29:53.466 [2024-12-09 05:24:44.902887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.902912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.905635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.905655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.905660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.908555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.908585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.908590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.911426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.911448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.911453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.914310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.914332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.914338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.917029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.917051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.917056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.919897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.919918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.919923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.922804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.922825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.922830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.925715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.925737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.925742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.928609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.928629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.928634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.931643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.931662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.931668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.934379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.934400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.934406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.937151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.937174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.937180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.940020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.940041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.940046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.942906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.942928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.466 [2024-12-09 05:24:44.942933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.466 [2024-12-09 05:24:44.945701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.466 [2024-12-09 05:24:44.945721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.945726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.948591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.948611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.948617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.951458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.951478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.951484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.954357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.954379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.954384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.957283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.957304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.957310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.960222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.960249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.960255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.963125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.963146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.963152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.965951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.965973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.965979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.968683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.968703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.968708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.971445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.971466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.971471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.974321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.974343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.974348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.977240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.977262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.977267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.980104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.980126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 
[2024-12-09 05:24:44.980131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.982990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.983011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.983016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.985791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.985812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.985818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.988545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.988575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.988580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.991354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.991374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.991380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.994241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.994264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.994270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:44.997148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:44.997171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:44.997176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.000014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.000035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.000040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.002839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.002860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.002865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.005725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.005746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.005752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.008570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.008590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.008602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.011300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.011321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.011326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.014193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.014215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.014221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.017114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.017136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.017141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.020023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.020044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.467 [2024-12-09 05:24:45.020050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.467 [2024-12-09 05:24:45.022872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.467 [2024-12-09 05:24:45.022894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.022899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.025766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.025787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.025793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.028646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.028667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.028672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.031630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.031649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.031654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.034437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.034459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.034464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.037317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.037340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.037345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.040210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.040232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.040237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.043103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.043124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.043129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.045940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.045962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.045967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.048847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.048873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.051711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.051731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.051736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.054583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.054604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.054609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.057308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.057331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.057336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.060007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 
05:24:45.060029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.060034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.062870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.062892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.062898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.065724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.065746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.065751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.068630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.068649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.068655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.468 11224.00 IOPS, 1403.00 MiB/s [2024-12-09T05:24:45.085Z] [2024-12-09 05:24:45.072802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.072823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.468 [2024-12-09 05:24:45.075726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.468 [2024-12-09 05:24:45.075746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.468 [2024-12-09 05:24:45.075752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.078626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.078646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.078652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.081495] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.081518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.081523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.084480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.084501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.084506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.087100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.087122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.087127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.089953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.089975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.089981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.092855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.092877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.092882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.095714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.095735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.095740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.098542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.098574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.098580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:29:53.729 [2024-12-09 05:24:45.101448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.101470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.101476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.104341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.104362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.104367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.107119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.107140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.107145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.110027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.110049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.110054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.113080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.113103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.113109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.115938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.115959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.115965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.118884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.118906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.118912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.729 [2024-12-09 05:24:45.121647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.729 [2024-12-09 05:24:45.121667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.729 [2024-12-09 05:24:45.121673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.124525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.124547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.124552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.127413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.127434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.127440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.130330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.130353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.130358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.133275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.133296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.133302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.136194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.136215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.136221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.139100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.139121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.139127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.141899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.141921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.141926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.144809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.144831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.144836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.147658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.147677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.147682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.150556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.150593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.150598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.153502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.153525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.153530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.156375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.156396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.156401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.159219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.159240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.159245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.162013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.162035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.162040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.164895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.164917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.164922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.167747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.167767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.167772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.170661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.170682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.170688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.173577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.173602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.176278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.176298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.176303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.179020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 
[2024-12-09 05:24:45.179048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.181935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.181957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.181962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.184810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.184832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.184837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.187664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.187684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.187689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.190585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.190606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.190611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.193494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.193516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.193522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.196358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.730 [2024-12-09 05:24:45.196379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.730 [2024-12-09 05:24:45.196385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.730 [2024-12-09 05:24:45.199259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.199280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.199285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.202038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.202061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.202066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.204888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.204910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.204916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.207777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.207798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.207803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.210640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.210661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.210666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.213525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.213547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.213552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.216370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.216391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.219274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.219295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.219301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.222111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.222133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.222138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.224974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.224996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.225001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.227817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.227837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.227843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.230692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.230712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.230717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.233593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.233613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.233619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.236476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.236498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.236503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.239262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.239283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.239288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.242155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.242177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.242182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.245031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.245053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.245058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.247872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.247893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.247899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.250716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.250742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.253613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.253634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.253639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.256434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.256456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.256461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.259304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 
[2024-12-09 05:24:45.259326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.259331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.262156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.262179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.262184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.265029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.265051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.265056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.267897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.267919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.270798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.270820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.270826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.273687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.731 [2024-12-09 05:24:45.273708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.731 [2024-12-09 05:24:45.273714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.731 [2024-12-09 05:24:45.276531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.276553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.276558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.279457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.279478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.279484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.282388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.282411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.282417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.285300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.285323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.285329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.288194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.288215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.288220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.291143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.291164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.291169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.294001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.294022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.294027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.296798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.296818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.299571] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.299585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.299590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.302484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.302506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.302511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.305380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.305402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.305408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.308267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.308286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.308292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.311010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.311029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.311035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.313778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.313799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.313805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.316811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.316833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.316839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
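[editorial note] The entries above and below all repeat one pattern: nvme_tcp reports a data digest error on the receive path for a READ, and the host completes that command with the generic NVMe status "COMMAND TRANSIENT TRANSPORT ERROR" (00/22) with dnr:0, i.e. a retryable completion. As a rough illustration of the check being exercised (a minimal standalone sketch, not SPDK's accel-framework implementation; crc32c() and verify_data_digest() are hypothetical names), the NVMe/TCP data digest is a CRC32C over the PDU payload, and a mismatch is surfaced as status 00/22:

    /* Illustrative sketch only; names and structure are assumptions, not SPDK code. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NVME_SC_TRANSIENT_TRANSPORT_ERROR 0x22 /* generic status 00/22, as in the log */

    /* Bitwise software CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
            }
        }
        return ~crc;
    }

    /* Returns the status code the host would use when manufacturing the completion. */
    static int verify_data_digest(const void *payload, size_t len, uint32_t received_ddgst)
    {
        if (crc32c(payload, len) != received_ddgst) {
            fprintf(stderr, "data digest error: completing with 00/%02x (retryable)\n",
                    NVME_SC_TRANSIENT_TRANSPORT_ERROR);
            return NVME_SC_TRANSIENT_TRANSPORT_ERROR;
        }
        return 0; /* digest matches: normal successful completion */
    }

    int main(void)
    {
        uint8_t payload[32] = {0};
        uint32_t good = crc32c(payload, sizeof(payload));

        verify_data_digest(payload, sizeof(payload), good);     /* matches */
        verify_data_digest(payload, sizeof(payload), good ^ 1); /* simulated digest corruption */
        return 0;
    }

This is why the test keeps producing matched pairs of an *ERROR* data-digest line followed by a *NOTICE* completion line: each corrupted digest is detected per command and reported as a transient transport error rather than failing the queue pair.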
01:29:53.732 [2024-12-09 05:24:45.319637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.319656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.319662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.322417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.322438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.322443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.325180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.325203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.325208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.328020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.328041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.328046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.330902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.330923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.330929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.333620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.333641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.333647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.336507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.336528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.336534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.339277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.339299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.339304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.732 [2024-12-09 05:24:45.342071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.732 [2024-12-09 05:24:45.342093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.732 [2024-12-09 05:24:45.342098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.992 [2024-12-09 05:24:45.344972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.992 [2024-12-09 05:24:45.344994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.992 [2024-12-09 05:24:45.344999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.992 [2024-12-09 05:24:45.347822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.992 [2024-12-09 05:24:45.347843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.992 [2024-12-09 05:24:45.347848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.992 [2024-12-09 05:24:45.350745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.992 [2024-12-09 05:24:45.350766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.992 [2024-12-09 05:24:45.350771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.992 [2024-12-09 05:24:45.353634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.992 [2024-12-09 05:24:45.353654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.992 [2024-12-09 05:24:45.353659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.992 [2024-12-09 05:24:45.356474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.992 [2024-12-09 05:24:45.356496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.356501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.359326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.359347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.359352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.362206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.362228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.362234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.365125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.365148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.365153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.367992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.368014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.368019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.370804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.370825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.370831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.373670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.373691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.373697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.376602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.376622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.376627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.379164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.379185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.379190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.382055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.382077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.382082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.384896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.384917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.384923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.387664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.387683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.387689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.390520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.390542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.390548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.393346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.393369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.393375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.396232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.396258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 
[2024-12-09 05:24:45.396263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.399133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.399153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.399159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.401936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.401959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.401964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.404750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.404772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.404777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.407555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.407585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.407591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.410425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.410448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.410453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.413236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.413257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.416111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.416132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.416137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.419116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.419138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.419143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.421971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.421993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.424896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.424919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.424924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.427755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.993 [2024-12-09 05:24:45.430586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.993 [2024-12-09 05:24:45.430607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.993 [2024-12-09 05:24:45.430612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.433404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.433426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.433431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.436291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.436310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.436316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.439146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.439168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.439173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.441924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.441947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.441952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.444646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.444666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.444672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.447480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.447500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.447506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.450381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.450403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.450409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.453346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.453368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.453374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.456134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.456154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.456160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.458835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.458856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.458861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.461709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.461736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.464595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.464617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.464622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.467444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.467465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.467470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.470348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.470370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.470376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.473260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.473282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.473288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.476207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 
[2024-12-09 05:24:45.476227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.476233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.479064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.479085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.479091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.481850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.481872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.481877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.484714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.484735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.484740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.487516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.487536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.487542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.490419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.490441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.490447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.493322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.493345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.493350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.496220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.496252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.496258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.499137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.499159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.499164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.502043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.502064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.994 [2024-12-09 05:24:45.502069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.994 [2024-12-09 05:24:45.504855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.994 [2024-12-09 05:24:45.504878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.504883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.507732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.507752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.507757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.510451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.510473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.510479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.513327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.513350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.513355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.516198] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.516218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.516223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.519069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.519090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.519095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.521807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.521828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.521833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.524694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.524719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.527541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.527573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.527578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.530439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.530462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.530468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.533315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.533337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.533343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:29:53.995 [2024-12-09 05:24:45.536248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.536267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.536272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.539128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.539150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.539156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.542005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.542028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.542034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.544893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.544915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.547756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.547777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.547782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.550682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.550703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.550709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.553529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.553551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.553556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.556423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.556443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.559342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.559364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.559369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.562256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.562279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.562285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.565072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.565094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.565099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.567841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.567861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.567867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.570699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.570720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.570725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.573547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.573579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.573584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.576447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.576469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.579343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.579365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.995 [2024-12-09 05:24:45.579371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.995 [2024-12-09 05:24:45.582239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.995 [2024-12-09 05:24:45.582262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.582268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.585071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.585093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.585099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.588098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.588120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.588125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.591044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.591066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.591071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.593937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.593960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.593966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.596813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.596836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.596841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.599684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.599705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.599710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.602511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.602532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.602537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:53.996 [2024-12-09 05:24:45.605421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:53.996 [2024-12-09 05:24:45.605444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:53.996 [2024-12-09 05:24:45.605449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.608329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.608349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.608355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.611214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.611240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.614103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.614126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 
[2024-12-09 05:24:45.614131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.617015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.617038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.617043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.619886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.619908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.619913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.622752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.622774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.622779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.625633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.625653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.625658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.628537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.628558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.628573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.631439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.631461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.631466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.634322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.634345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.634351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.637222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.637245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.637251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.640111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.640133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.640139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.642969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.642991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.642996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.645862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.645885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.645890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.648716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.648737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.648743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.651598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.651617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.651623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.654443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.654466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.654471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.657370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.657393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.257 [2024-12-09 05:24:45.657398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.257 [2024-12-09 05:24:45.660237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.257 [2024-12-09 05:24:45.660265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.660270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.663097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.663118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.665997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.666020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.666025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.668800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.668821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.668827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.671663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.671684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.671689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.674498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.674521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.674526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.677360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.677383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.677388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.680260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.680283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.680288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.683168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.683190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.683196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.686083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.686106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.686111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.688964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.688988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.688993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.691799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.691821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.691826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.694693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 
01:29:54.258 [2024-12-09 05:24:45.694715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.694720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.697550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.697582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.697587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.700434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.700456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.700462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.703314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.703336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.703342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.706179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.706206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.708996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.709018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.709024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.711896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.711918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.711923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.714761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.714784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.714790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.717648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.717669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.717675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.720587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.720608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.720613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.723489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.723510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.723516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.726370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.726391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.726397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.729288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.729312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.258 [2024-12-09 05:24:45.732106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.258 [2024-12-09 05:24:45.732127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.258 [2024-12-09 05:24:45.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.734996] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.735017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.735023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.737887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.737909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.737914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.740747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.740767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.740773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.743637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.743657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.743662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.746515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.746538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.749389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.749412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.749417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.752272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.752292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:29:54.259 [2024-12-09 05:24:45.755144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.755165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.755170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.758051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.758074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.758079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.761054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.761075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.761080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.763972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.763993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.763999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.766841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.766861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.766867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.769751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.769773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.769778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.772650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.772670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.772676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.775466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.775487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.775492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.778299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.778320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.778326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.781197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.781219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.781224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.783974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.783994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.784000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.786869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.786889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.786894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.789783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.789806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.259 [2024-12-09 05:24:45.789811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.259 [2024-12-09 05:24:45.792721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.259 [2024-12-09 05:24:45.792741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.792746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.795551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.795585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.795591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.798382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.798409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.801151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.801174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.801179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.803984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.804005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.804010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.806997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.807018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.807024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.809893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.809916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.809921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.812805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.812826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.812832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.815756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.815777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.815782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.818650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.818670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.818676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.821498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.821521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.821526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.824361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.824382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.824388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.827227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.827247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.827252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.830133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.830155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.830161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.833052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.833074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 
[2024-12-09 05:24:45.833079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.836009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.836030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.836035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.838866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.838887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.838893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.841752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.841778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.844675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.844695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.844701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.847540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.847575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.847581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.850426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.850449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.850454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.853441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.853464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.853469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.856284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.856305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.856310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.859150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.859172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.859178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.862063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.862090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.864905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.864927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.864933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.260 [2024-12-09 05:24:45.867726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.260 [2024-12-09 05:24:45.867747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.260 [2024-12-09 05:24:45.867753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.870614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.870635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.870640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.873473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.873496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.873501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.876364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.876385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.876390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.879175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.879197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.879202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.882045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.882067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.882073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.884861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.884889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.887717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.887737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.887742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.890593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.890613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.890619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.893472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.893495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.893501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.896322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.896342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.896347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.899307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.899334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.901978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.902000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.902005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.520 [2024-12-09 05:24:45.904791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.520 [2024-12-09 05:24:45.904812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.520 [2024-12-09 05:24:45.904817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.907632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.907651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.907656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.910502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.910524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.913340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 
[2024-12-09 05:24:45.913362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.916257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.916277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.919103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.919124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.921998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.922020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.922025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.924919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.924939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.924945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.927605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.927625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.927631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.930554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.930585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.930590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.933460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.933483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.933488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.936202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.936223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.936229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.939072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.939093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.939098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.941951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.941973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.941978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.944771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.944799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.944804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.947635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.947656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.947661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.950508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.950531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.950536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.953397] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.953419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.953424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.956377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.956398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.956403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.959136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.959158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.959163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.962196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.962218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.962223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.965102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.965126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.965131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.968034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.968055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.968061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.970901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.970923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.970928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:29:54.521 [2024-12-09 05:24:45.973791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.973813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.973819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.976641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.976662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.976667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.979466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.979488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.521 [2024-12-09 05:24:45.979493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.521 [2024-12-09 05:24:45.982320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.521 [2024-12-09 05:24:45.982342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.982347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.985153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.985175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.985180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.988028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.988049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.988054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.990905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.990927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.990932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.993785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.993808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.993813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.996665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.996685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.996690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:45.999532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:45.999553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:45.999569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.002434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.002456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.005300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.005322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.005327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.008342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.008363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.008369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.010484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.010505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.010510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.012612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.012631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.012637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.014903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.014930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.016606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.016625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.016631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.019000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.019020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.019025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.020959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.020980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.020986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.023227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.023249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.023254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.025273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.025294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.025300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.027295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.027316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.027321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.029350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.029372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.029377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.031492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.031512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.031518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.033278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.033300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.033305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.035529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.035550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.035555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.038466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.038488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.038493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.041299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.041321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 
[2024-12-09 05:24:46.041326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.044186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.044207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.044213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.047040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.047061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.047067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.522 [2024-12-09 05:24:46.049814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.522 [2024-12-09 05:24:46.049837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.522 [2024-12-09 05:24:46.049842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.052705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.052727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.052732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.055625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.055645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.055650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.058328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.058350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.058356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.061151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.061174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.061179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.064007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.064029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.064034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:54.523 [2024-12-09 05:24:46.066798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.066820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.066826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:54.523 11064.50 IOPS, 1383.06 MiB/s [2024-12-09T05:24:46.140Z] [2024-12-09 05:24:46.070939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce0e00) 01:29:54.523 [2024-12-09 05:24:46.070959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:54.523 [2024-12-09 05:24:46.070965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:54.523 01:29:54.523 Latency(us) 01:29:54.523 [2024-12-09T05:24:46.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:54.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:29:54.523 nvme0n1 : 2.00 11062.38 1382.80 0.00 0.00 1443.80 419.05 11090.71 01:29:54.523 [2024-12-09T05:24:46.140Z] =================================================================================================================== 01:29:54.523 [2024-12-09T05:24:46.140Z] Total : 11062.38 1382.80 0.00 0.00 1443.80 419.05 11090.71 01:29:54.523 { 01:29:54.523 "results": [ 01:29:54.523 { 01:29:54.523 "job": "nvme0n1", 01:29:54.523 "core_mask": "0x2", 01:29:54.523 "workload": "randread", 01:29:54.523 "status": "finished", 01:29:54.523 "queue_depth": 16, 01:29:54.523 "io_size": 131072, 01:29:54.523 "runtime": 2.00183, 01:29:54.523 "iops": 11062.377924199358, 01:29:54.523 "mibps": 1382.7972405249197, 01:29:54.523 "io_failed": 0, 01:29:54.523 "io_timeout": 0, 01:29:54.523 "avg_latency_us": 1443.804465811001, 01:29:54.523 "min_latency_us": 419.0523076923077, 01:29:54.523 "max_latency_us": 11090.707692307693 01:29:54.523 } 01:29:54.523 ], 01:29:54.523 "core_count": 1 01:29:54.523 } 01:29:54.523 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:29:54.523 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:29:54.523 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:29:54.523 | .driver_specific 01:29:54.523 | .nvme_error 01:29:54.523 | .status_code 01:29:54.523 | .command_transient_transport_error' 
01:29:54.523 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 715 > 0 )) 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93715 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93715 ']' 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93715 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93715 01:29:54.781 killing process with pid 93715 01:29:54.781 Received shutdown signal, test time was about 2.000000 seconds 01:29:54.781 01:29:54.781 Latency(us) 01:29:54.781 [2024-12-09T05:24:46.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:54.781 [2024-12-09T05:24:46.398Z] =================================================================================================================== 01:29:54.781 [2024-12-09T05:24:46.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93715' 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93715 01:29:54.781 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93715 01:29:55.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
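The trace above is host/digest.sh's get_transient_errcount step: after the 2-second randread run, the script reads the per-bdev NVMe error statistics back from bdevperf over its RPC socket and extracts the COMMAND TRANSIENT TRANSPORT ERROR counter. Here the counter is 715 while the JSON result shows io_failed 0, so the injected digest errors surface as transient transport errors without any I/O ultimately failing; the (( 715 > 0 )) check passes and the bdevperf instance (pid 93715) is shut down before the next case. A minimal stand-alone sketch of the same query, assuming rpc.py and jq are on PATH and bdevperf is still listening on /var/tmp/bperf.sock:

  # Ask bdevperf for nvme0n1's I/O statistics and pull out the transient-transport-error count
  # (the nvme_error block is only populated when bdev_nvme_set_options --nvme-error-stat is set,
  #  as done in the setup further down)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'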
01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93785 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93785 /var/tmp/bperf.sock 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93785 ']' 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:55.039 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:55.039 [2024-12-09 05:24:46.486456] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
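run_bperf_err then repeats the experiment for the randwrite / 4 KiB / queue-depth-128 combination: it records rw, bs and qd, starts a fresh bdevperf (pid 93785) against /var/tmp/bperf.sock, and waitforlisten blocks until that socket accepts RPCs (the "Waiting for process to start up..." message above). The launch command is the same one shown in the trace, annotated here as a sketch; the backgrounding and pid capture stand in for what the helper does internally:

  #   -m 2                     core mask: run the reactor on core 1
  #   -r /var/tmp/bperf.sock   JSON-RPC socket used below by bperf_rpc / bperf_py
  #   -w randwrite -o 4096 -q 128   workload type, I/O size in bytes, queue depth
  #   -t 2                     run the workload for 2 seconds
  #   -z                       start idle and wait for a perform_tests RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!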
01:29:55.039 [2024-12-09 05:24:46.486505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93785 ] 01:29:55.039 [2024-12-09 05:24:46.616441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:55.039 [2024-12-09 05:24:46.645275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:29:55.297 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:55.297 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:29:55.297 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:55.297 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:55.554 05:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:29:55.812 nvme0n1 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:29:55.812 05:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:29:55.812 Running I/O for 2 seconds... 
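Before perform_tests is issued, the new bdevperf is configured over /var/tmp/bperf.sock: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 turns on per-status-code NVMe error counting and sets the bdev-layer retry count to -1, i.e. keep retrying rather than failing the I/O, and bdev_nvme_attach_controller --ddgst connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 with TCP data digests enabled so each data PDU is protected by a CRC32C. In parallel, rpc_cmd (the target application's RPC channel, unlike bperf_rpc, which targets /var/tmp/bperf.sock) first disables and then re-arms the accel crc32c error injection (-t corrupt -i 256), so some of the computed data digests are deliberately wrong; the stream of "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines that follows is the intended result. A condensed sketch of the host-side half, using only the commands and addresses visible in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # count NVMe errors per status code; retry failed I/O in the bdev layer instead of failing it
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest generation/verification enabled
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the 2-second randwrite workload defined on the bdevperf command line
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests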
01:29:55.812 [2024-12-09 05:24:47.314277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef35f0 01:29:55.812 [2024-12-09 05:24:47.315137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.315161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.324646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee1f80 01:29:55.812 [2024-12-09 05:24:47.325978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.325996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.330750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee1f80 01:29:55.812 [2024-12-09 05:24:47.331369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.331387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.341067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef35f0 01:29:55.812 [2024-12-09 05:24:47.342188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.342205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.349798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeb760 01:29:55.812 [2024-12-09 05:24:47.350913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.350930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.357980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efef90 01:29:55.812 [2024-12-09 05:24:47.358993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.359010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.365744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6b70 01:29:55.812 [2024-12-09 05:24:47.366432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.366452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.374141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee12d8 01:29:55.812 [2024-12-09 05:24:47.374909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.374926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.382862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efac10 01:29:55.812 [2024-12-09 05:24:47.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.383643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.391032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:55.812 [2024-12-09 05:24:47.391689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.391708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.399883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eff3c8 01:29:55.812 [2024-12-09 05:24:47.400539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.400557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.408024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeea00 01:29:55.812 [2024-12-09 05:24:47.408555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.419069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee1710 01:29:55.812 [2024-12-09 05:24:47.420446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.420463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:29:55.812 [2024-12-09 05:24:47.425169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0350 01:29:55.812 [2024-12-09 05:24:47.425837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:55.812 [2024-12-09 05:24:47.425854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.433855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee73e0 01:29:56.071 [2024-12-09 05:24:47.434513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.434530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.442035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee01f8 01:29:56.071 [2024-12-09 05:24:47.442595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.442612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.450474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebfd0 01:29:56.071 [2024-12-09 05:24:47.451032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.451049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.460758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef4f40 01:29:56.071 [2024-12-09 05:24:47.461799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.461816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.468731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeb328 01:29:56.071 [2024-12-09 05:24:47.469518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.469537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.477102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee88f8 01:29:56.071 [2024-12-09 05:24:47.477809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.477827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.485234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebfd0 01:29:56.071 [2024-12-09 05:24:47.485817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.485834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.495994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee49b0 01:29:56.071 [2024-12-09 05:24:47.497304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.502875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf118 01:29:56.071 [2024-12-09 05:24:47.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.503718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.511017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efcdd0 01:29:56.071 [2024-12-09 05:24:47.511738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.511756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.521060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee73e0 01:29:56.071 [2024-12-09 05:24:47.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.522261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.528650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eefae0 01:29:56.071 [2024-12-09 05:24:47.529982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.530000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.538171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:56.071 [2024-12-09 05:24:47.539245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.539264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.546306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7538 01:29:56.071 [2024-12-09 05:24:47.547258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.547276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.554449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0bc0 01:29:56.071 [2024-12-09 05:24:47.555290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.562607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2510 01:29:56.071 [2024-12-09 05:24:47.563321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.563338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.570749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec408 01:29:56.071 [2024-12-09 05:24:47.571353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.071 [2024-12-09 05:24:47.571372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:29:56.071 [2024-12-09 05:24:47.580233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2510 01:29:56.072 [2024-12-09 05:24:47.581206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.581224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.588395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2948 01:29:56.072 [2024-12-09 05:24:47.589250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.589267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.597234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef20d8 01:29:56.072 [2024-12-09 05:24:47.597854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.597873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.605369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef1868 01:29:56.072 [2024-12-09 05:24:47.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.605902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.615137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edfdc0 01:29:56.072 [2024-12-09 05:24:47.616234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.616258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.623281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efac10 01:29:56.072 [2024-12-09 05:24:47.624266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.624283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.631427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf118 01:29:56.072 [2024-12-09 05:24:47.632307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.632325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.639582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee3060 01:29:56.072 [2024-12-09 05:24:47.640328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.640349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.647735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef6890 01:29:56.072 [2024-12-09 05:24:47.648373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.648393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.655876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efd640 01:29:56.072 [2024-12-09 05:24:47.656400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.656420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.666664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edece0 01:29:56.072 [2024-12-09 05:24:47.667994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 
05:24:47.668014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.674910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee0630 01:29:56.072 [2024-12-09 05:24:47.676045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.676065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:29:56.072 [2024-12-09 05:24:47.682685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee3060 01:29:56.072 [2024-12-09 05:24:47.683507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.072 [2024-12-09 05:24:47.683529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.691061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efef90 01:29:56.331 [2024-12-09 05:24:47.691952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.691972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.701319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:56.331 [2024-12-09 05:24:47.702708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.702728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.707414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec408 01:29:56.331 [2024-12-09 05:24:47.708091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.708109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.717712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eee190 01:29:56.331 [2024-12-09 05:24:47.718954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.718973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.725791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eea248 01:29:56.331 [2024-12-09 05:24:47.726670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:29:56.331 [2024-12-09 05:24:47.726691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.734169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeb760 01:29:56.331 [2024-12-09 05:24:47.735122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.735142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.744437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eef270 01:29:56.331 [2024-12-09 05:24:47.745893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.750543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efcdd0 01:29:56.331 [2024-12-09 05:24:47.751285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.751305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.760828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7100 01:29:56.331 [2024-12-09 05:24:47.762128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.762147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.766999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee95a0 01:29:56.331 [2024-12-09 05:24:47.767514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.767533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.777277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee38d0 01:29:56.331 [2024-12-09 05:24:47.778291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.778310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.785982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef4f40 01:29:56.331 [2024-12-09 05:24:47.786993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14141 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.787013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.794151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efd640 01:29:56.331 [2024-12-09 05:24:47.795057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.795077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.802293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eea680 01:29:56.331 [2024-12-09 05:24:47.803073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.810456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee01f8 01:29:56.331 [2024-12-09 05:24:47.811131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.811151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.818608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef92c0 01:29:56.331 [2024-12-09 05:24:47.819149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.819169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.828728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7538 01:29:56.331 [2024-12-09 05:24:47.829394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.829491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.836888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf550 01:29:56.331 [2024-12-09 05:24:47.837474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.837496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.847021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebb98 01:29:56.331 [2024-12-09 05:24:47.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21402 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.848501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.853197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6b70 01:29:56.331 [2024-12-09 05:24:47.853874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.853893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.863458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edece0 01:29:56.331 [2024-12-09 05:24:47.864689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.864710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.871104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef96f8 01:29:56.331 [2024-12-09 05:24:47.872411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.872433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.880300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edfdc0 01:29:56.331 [2024-12-09 05:24:47.881108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.881126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.888539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee7818 01:29:56.331 [2024-12-09 05:24:47.889127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.889150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.897054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3a28 01:29:56.331 [2024-12-09 05:24:47.897877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.897897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.904932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:56.331 [2024-12-09 05:24:47.905651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:20946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.905670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.914552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efdeb0 01:29:56.331 [2024-12-09 05:24:47.915618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.915638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.922553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5be8 01:29:56.331 [2024-12-09 05:24:47.923336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.923423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.930945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edece0 01:29:56.331 [2024-12-09 05:24:47.931857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.931874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:29:56.331 [2024-12-09 05:24:47.941286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eedd58 01:29:56.331 [2024-12-09 05:24:47.942628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.331 [2024-12-09 05:24:47.942648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.949978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efc560 01:29:56.590 [2024-12-09 05:24:47.951380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.958235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef4b08 01:29:56.590 [2024-12-09 05:24:47.959464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.959484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.966366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efbcf0 01:29:56.590 [2024-12-09 05:24:47.967467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.967487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.974549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee3498 01:29:56.590 [2024-12-09 05:24:47.975554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.982305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7970 01:29:56.590 [2024-12-09 05:24:47.983019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.983041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:47.990689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eddc00 01:29:56.590 [2024-12-09 05:24:47.991433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:47.991453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.000950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebfd0 01:29:56.590 [2024-12-09 05:24:48.002266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.002347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.009726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf988 01:29:56.590 [2024-12-09 05:24:48.010967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.010988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.017905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efeb58 01:29:56.590 [2024-12-09 05:24:48.019043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.019063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.026032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeaef0 01:29:56.590 [2024-12-09 
05:24:48.027043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.027063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.034207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2510 01:29:56.590 [2024-12-09 05:24:48.035113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.035132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.042343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3e60 01:29:56.590 [2024-12-09 05:24:48.043122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.043143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.050515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eff3c8 01:29:56.590 [2024-12-09 05:24:48.051198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.051218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.058660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6b70 01:29:56.590 [2024-12-09 05:24:48.059202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.059222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.068786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef6cc8 01:29:56.590 [2024-12-09 05:24:48.069451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.590 [2024-12-09 05:24:48.069531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:56.590 [2024-12-09 05:24:48.076943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eed920 01:29:56.590 [2024-12-09 05:24:48.077529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.077551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.085082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with 
pdu=0x200016ef35f0 01:29:56.591 [2024-12-09 05:24:48.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.085541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.094863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec840 01:29:56.591 [2024-12-09 05:24:48.095902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.095977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.103040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee5658 01:29:56.591 [2024-12-09 05:24:48.103975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.103997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.111164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efd208 01:29:56.591 [2024-12-09 05:24:48.111975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.111996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.119327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee95a0 01:29:56.591 [2024-12-09 05:24:48.120032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.120051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.127456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eee190 01:29:56.591 [2024-12-09 05:24:48.128041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.128061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.138232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:56.591 [2024-12-09 05:24:48.139543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.139632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.146402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a81020) with pdu=0x200016ef8a50 01:29:56.591 [2024-12-09 05:24:48.147684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.147767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.152802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3a28 01:29:56.591 [2024-12-09 05:24:48.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.153535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.163243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee5658 01:29:56.591 [2024-12-09 05:24:48.164279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.164361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.171442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee5220 01:29:56.591 [2024-12-09 05:24:48.172356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.172442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.179689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efb048 01:29:56.591 [2024-12-09 05:24:48.180489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.180587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.189967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef81e0 01:29:56.591 [2024-12-09 05:24:48.191353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.191436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:29:56.591 [2024-12-09 05:24:48.196265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7538 01:29:56.591 [2024-12-09 05:24:48.196942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.591 [2024-12-09 05:24:48.197024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:29:56.864 [2024-12-09 05:24:48.206698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1a81020) with pdu=0x200016ee4de8 01:29:56.864 [2024-12-09 05:24:48.207870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.864 [2024-12-09 05:24:48.207949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:29:56.864 [2024-12-09 05:24:48.214857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:56.864 [2024-12-09 05:24:48.215710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.864 [2024-12-09 05:24:48.215792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.223453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeff18 01:29:56.865 [2024-12-09 05:24:48.224285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.224371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.231902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efb048 01:29:56.865 [2024-12-09 05:24:48.232864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.232948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.242328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee3d08 01:29:56.865 [2024-12-09 05:24:48.243774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.243855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.248611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efe720 01:29:56.865 [2024-12-09 05:24:48.249333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.249415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.259039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2d80 01:29:56.865 [2024-12-09 05:24:48.260267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.260351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.266805] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec840 01:29:56.865 [2024-12-09 05:24:48.268075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.268158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.276055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee5658 01:29:56.865 [2024-12-09 05:24:48.276852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.276936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.284288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebb98 01:29:56.865 [2024-12-09 05:24:48.284928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.285011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.292512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee84c0 01:29:56.865 [2024-12-09 05:24:48.293079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.293160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:29:56.865 29516.00 IOPS, 115.30 MiB/s [2024-12-09T05:24:48.482Z] [2024-12-09 05:24:48.303908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:56.865 [2024-12-09 05:24:48.305393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.305472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.310179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eed4e8 01:29:56.865 [2024-12-09 05:24:48.310948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.311027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.318952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7538 01:29:56.865 [2024-12-09 05:24:48.319719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.319802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.329551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efb8b8 01:29:56.865 [2024-12-09 05:24:48.330942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.331022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.335845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:56.865 [2024-12-09 05:24:48.336516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.336611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.346295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efac10 01:29:56.865 [2024-12-09 05:24:48.347460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.347533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.354411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eee5c8 01:29:56.865 [2024-12-09 05:24:48.355249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.355272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.362813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4de8 01:29:56.865 [2024-12-09 05:24:48.363689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.363709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.371386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef20d8 01:29:56.865 [2024-12-09 05:24:48.371911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.371932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.379939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efc128 01:29:56.865 [2024-12-09 05:24:48.380708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.380729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.388069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef35f0 01:29:56.865 [2024-12-09 05:24:48.388724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.388744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.396227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eea680 01:29:56.865 [2024-12-09 05:24:48.396765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.396784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.406998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef1430 01:29:56.865 [2024-12-09 05:24:48.408258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.408278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.415142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7da8 01:29:56.865 [2024-12-09 05:24:48.416284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.416304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.423304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee8088 01:29:56.865 [2024-12-09 05:24:48.424341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.424361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.431452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edfdc0 01:29:56.865 [2024-12-09 05:24:48.432366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.432387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.439620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee5658 01:29:56.865 [2024-12-09 05:24:48.440413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.865 [2024-12-09 05:24:48.440433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:29:56.865 [2024-12-09 05:24:48.447762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6b70 01:29:56.865 [2024-12-09 05:24:48.448440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.866 [2024-12-09 05:24:48.448459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:29:56.866 [2024-12-09 05:24:48.455925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2510 01:29:56.866 [2024-12-09 05:24:48.456485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.866 [2024-12-09 05:24:48.456505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:29:56.866 [2024-12-09 05:24:48.466051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee49b0 01:29:56.866 [2024-12-09 05:24:48.466753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.866 [2024-12-09 05:24:48.466772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:56.866 [2024-12-09 05:24:48.474196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eed920 01:29:56.866 [2024-12-09 05:24:48.474772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:56.866 [2024-12-09 05:24:48.474793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.482345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee95a0 01:29:57.124 [2024-12-09 05:24:48.482820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.482841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.492129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee7c50 01:29:57.124 [2024-12-09 05:24:48.493190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.493209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.500272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eedd58 01:29:57.124 [2024-12-09 05:24:48.501207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.501227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.508439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:57.124 [2024-12-09 05:24:48.509264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.509283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.516584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3a28 01:29:57.124 [2024-12-09 05:24:48.517287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.517306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.524755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5378 01:29:57.124 [2024-12-09 05:24:48.525343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.525363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.534972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eee5c8 01:29:57.124 [2024-12-09 05:24:48.536162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.536182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.542965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7100 01:29:57.124 [2024-12-09 05:24:48.543893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.543914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.551367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efe720 01:29:57.124 [2024-12-09 05:24:48.552347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.552368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.559360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef46d0 01:29:57.124 [2024-12-09 05:24:48.560075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.560096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.567754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee1f80 01:29:57.124 [2024-12-09 05:24:48.568598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.568615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.578101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef96f8 01:29:57.124 [2024-12-09 05:24:48.579351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.579371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.584217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf988 01:29:57.124 [2024-12-09 05:24:48.584764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.584783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.594482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4de8 01:29:57.124 [2024-12-09 05:24:48.595526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.595546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.603077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ede8a8 01:29:57.124 [2024-12-09 05:24:48.603758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.603778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.611450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee38d0 01:29:57.124 [2024-12-09 05:24:48.612327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.612348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.619850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef9f68 01:29:57.124 [2024-12-09 05:24:48.620791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 
05:24:48.620812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.630118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee7818 01:29:57.124 [2024-12-09 05:24:48.631623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.631643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.636302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7da8 01:29:57.124 [2024-12-09 05:24:48.637024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.637043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.646545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eed0b0 01:29:57.124 [2024-12-09 05:24:48.647768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.647787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.654535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6300 01:29:57.124 [2024-12-09 05:24:48.655472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.655494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.662938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0ff8 01:29:57.124 [2024-12-09 05:24:48.663933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.663952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.670925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eee190 01:29:57.124 [2024-12-09 05:24:48.671655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.671676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.679315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef4b08 01:29:57.124 [2024-12-09 05:24:48.679974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:29:57.124 [2024-12-09 05:24:48.679993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.687455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efda78 01:29:57.124 [2024-12-09 05:24:48.687995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.698239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edfdc0 01:29:57.124 [2024-12-09 05:24:48.699497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.699517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.706420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef9f68 01:29:57.124 [2024-12-09 05:24:48.707583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.707603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.714573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4de8 01:29:57.124 [2024-12-09 05:24:48.715596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.715617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.721460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edf988 01:29:57.124 [2024-12-09 05:24:48.722080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.722097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:29:57.124 [2024-12-09 05:24:48.731661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eedd58 01:29:57.124 [2024-12-09 05:24:48.732339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.124 [2024-12-09 05:24:48.732361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.739819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef46d0 01:29:57.382 [2024-12-09 05:24:48.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3131 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.740416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.747968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efa3a0 01:29:57.382 [2024-12-09 05:24:48.748425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.748446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.757763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0ff8 01:29:57.382 [2024-12-09 05:24:48.758803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.758823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.765927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efbcf0 01:29:57.382 [2024-12-09 05:24:48.766854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.766874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.774073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3e60 01:29:57.382 [2024-12-09 05:24:48.774881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.774901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.782221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eecc78 01:29:57.382 [2024-12-09 05:24:48.782918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.782937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.790366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7da8 01:29:57.382 [2024-12-09 05:24:48.790944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.790964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.801160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0350 01:29:57.382 [2024-12-09 05:24:48.802526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.802543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:29:57.382 [2024-12-09 05:24:48.809404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee88f8 01:29:57.382 [2024-12-09 05:24:48.810597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.382 [2024-12-09 05:24:48.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.817603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee3498 01:29:57.383 [2024-12-09 05:24:48.818702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.818717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.825969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef57b0 01:29:57.383 [2024-12-09 05:24:48.826954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.826971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.834363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efa3a0 01:29:57.383 [2024-12-09 05:24:48.835227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.835247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.844151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0bc0 01:29:57.383 [2024-12-09 05:24:48.845359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.845377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.852136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef46d0 01:29:57.383 [2024-12-09 05:24:48.853077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.853097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.860534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef1ca0 01:29:57.383 [2024-12-09 05:24:48.861517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.861534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.868548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee8d30 01:29:57.383 [2024-12-09 05:24:48.869267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.869286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.876941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee1710 01:29:57.383 [2024-12-09 05:24:48.877704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.877721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.887257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0788 01:29:57.383 [2024-12-09 05:24:48.888517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.888536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.893368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efdeb0 01:29:57.383 [2024-12-09 05:24:48.893916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.893933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.903628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eff3c8 01:29:57.383 [2024-12-09 05:24:48.904675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.904693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.911596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efcdd0 01:29:57.383 [2024-12-09 05:24:48.912371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.912390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.919977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef8e88 01:29:57.383 [2024-12-09 05:24:48.920683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.920700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.929460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efc560 01:29:57.383 [2024-12-09 05:24:48.930513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.930530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.937626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeb760 01:29:57.383 [2024-12-09 05:24:48.938558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.938581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.945771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efac10 01:29:57.383 [2024-12-09 05:24:48.946591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.946607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.953928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:57.383 [2024-12-09 05:24:48.954635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.954652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.962062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efda78 01:29:57.383 [2024-12-09 05:24:48.962652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.962669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.972173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee9e10 01:29:57.383 [2024-12-09 05:24:48.972900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.972917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.980332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7100 01:29:57.383 [2024-12-09 
05:24:48.980951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.980969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:29:57.383 [2024-12-09 05:24:48.988471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec408 01:29:57.383 [2024-12-09 05:24:48.988965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.383 [2024-12-09 05:24:48.988983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:48.998247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4140 01:29:57.642 [2024-12-09 05:24:48.999328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:48.999345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.006416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebb98 01:29:57.642 [2024-12-09 05:24:49.007386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.007404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.014569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eec840 01:29:57.642 [2024-12-09 05:24:49.015409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.015427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.022722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef5be8 01:29:57.642 [2024-12-09 05:24:49.023452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.023470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.032216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eebb98 01:29:57.642 [2024-12-09 05:24:49.033322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.033339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.040375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with 
pdu=0x200016efd640 01:29:57.642 [2024-12-09 05:24:49.041354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.041371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.048530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eea680 01:29:57.642 [2024-12-09 05:24:49.049397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.049413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.056676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef1ca0 01:29:57.642 [2024-12-09 05:24:49.057415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.057431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.064828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2948 01:29:57.642 [2024-12-09 05:24:49.065467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.065484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.074892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efb048 01:29:57.642 [2024-12-09 05:24:49.076000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.076016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.083582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ede470 01:29:57.642 [2024-12-09 05:24:49.084700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.084718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.091755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eedd58 01:29:57.642 [2024-12-09 05:24:49.092761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.092780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.099907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a81020) with pdu=0x200016ee9e10 01:29:57.642 [2024-12-09 05:24:49.100786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.100804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.108061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee9e10 01:29:57.642 [2024-12-09 05:24:49.108844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.108861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.116199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef0788 01:29:57.642 [2024-12-09 05:24:49.116848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.116866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.124384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee0630 01:29:57.642 [2024-12-09 05:24:49.124925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.124941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.134505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee88f8 01:29:57.642 [2024-12-09 05:24:49.135188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.135206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.142661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef3e60 01:29:57.642 [2024-12-09 05:24:49.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.143208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.150810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016edfdc0 01:29:57.642 [2024-12-09 05:24:49.151258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.151276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.160601] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef7da8 01:29:57.642 [2024-12-09 05:24:49.161630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.161647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.168749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ef2d80 01:29:57.642 [2024-12-09 05:24:49.169655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.169672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.176902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efac10 01:29:57.642 [2024-12-09 05:24:49.177698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.177716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.185037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efd208 01:29:57.642 [2024-12-09 05:24:49.185709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.185726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.193194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeee38 01:29:57.642 [2024-12-09 05:24:49.193762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.193779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.203958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efd640 01:29:57.642 [2024-12-09 05:24:49.205261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.642 [2024-12-09 05:24:49.205279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:29:57.642 [2024-12-09 05:24:49.212363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efe720 01:29:57.642 [2024-12-09 05:24:49.213653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.213669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:29:57.643 [2024-12-09 05:24:49.218463] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4de8 01:29:57.643 [2024-12-09 05:24:49.219040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.219058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:29:57.643 [2024-12-09 05:24:49.228743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ede470 01:29:57.643 [2024-12-09 05:24:49.229809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.229825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:29:57.643 [2024-12-09 05:24:49.236713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eeb328 01:29:57.643 [2024-12-09 05:24:49.237511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.237531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:29:57.643 [2024-12-09 05:24:49.245096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee6fa8 01:29:57.643 [2024-12-09 05:24:49.245827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.245843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:29:57.643 [2024-12-09 05:24:49.253235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016eea248 01:29:57.643 [2024-12-09 05:24:49.253839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.643 [2024-12-09 05:24:49.253856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:29:57.900 [2024-12-09 05:24:49.263991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee9e10 01:29:57.900 [2024-12-09 05:24:49.265331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.900 [2024-12-09 05:24:49.265349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:29:57.900 [2024-12-09 05:24:49.272180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efeb58 01:29:57.900 [2024-12-09 05:24:49.273411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:57.900 [2024-12-09 05:24:49.273428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:29:57.900 
[2024-12-09 05:24:49.280358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee2c28
01:29:57.900 [2024-12-09 05:24:49.281455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:29:57.900 [2024-12-09 05:24:49.281472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
01:29:57.900 [2024-12-09 05:24:49.288545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016efa7d8
01:29:57.900 [2024-12-09 05:24:49.289541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:29:57.900 [2024-12-09 05:24:49.289558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
01:29:57.900 [2024-12-09 05:24:49.296682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81020) with pdu=0x200016ee4578
01:29:57.900 [2024-12-09 05:24:49.297540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:29:57.900 [2024-12-09 05:24:49.297558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
01:29:57.900 29623.50 IOPS, 115.72 MiB/s
01:29:57.900 Latency(us)
01:29:57.900 [2024-12-09T05:24:49.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:29:57.900 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
01:29:57.900 nvme0n1 : 2.00 29619.37 115.70 0.00 0.00 4316.12 1751.83 11645.24
01:29:57.900 [2024-12-09T05:24:49.517Z] ===================================================================================================================
01:29:57.900 [2024-12-09T05:24:49.517Z] Total : 29619.37 115.70 0.00 0.00 4316.12 1751.83 11645.24
01:29:57.900 {
01:29:57.900   "results": [
01:29:57.900     {
01:29:57.900       "job": "nvme0n1",
01:29:57.900       "core_mask": "0x2",
01:29:57.900       "workload": "randwrite",
01:29:57.900       "status": "finished",
01:29:57.900       "queue_depth": 128,
01:29:57.900       "io_size": 4096,
01:29:57.900       "runtime": 2.002912,
01:29:57.900       "iops": 29619.374191177645,
01:29:57.900       "mibps": 115.70068043428768,
01:29:57.900       "io_failed": 0,
01:29:57.900       "io_timeout": 0,
01:29:57.900       "avg_latency_us": 4316.124714836786,
01:29:57.900       "min_latency_us": 1751.8276923076924,
01:29:57.900       "max_latency_us": 11645.243076923078
01:29:57.900     }
01:29:57.900   ],
01:29:57.900   "core_count": 1
01:29:57.900 }
01:29:57.900 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:29:57.900 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:29:57.900 | .driver_specific
01:29:57.900 | .nvme_error
01:29:57.900 | .status_code
01:29:57.900 | .command_transient_transport_error'
01:29:57.900 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:29:57.900 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 ))
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93785
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93785 ']'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93785
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93785
01:29:58.157 killing process with pid 93785
01:29:58.157 Received shutdown signal, test time was about 2.000000 seconds
01:29:58.157
01:29:58.157 Latency(us)
01:29:58.157 [2024-12-09T05:24:49.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:29:58.157 [2024-12-09T05:24:49.774Z] ===================================================================================================================
01:29:58.157 [2024-12-09T05:24:49.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93785'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93785
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93785
01:29:58.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
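The (( 232 > 0 )) check a few lines above is the pass condition for this phase: after the two-second randwrite run with data digest enabled and CRC32C corruption injected, the host must have recorded at least one COMMAND TRANSIENT TRANSPORT ERROR completion (232 were counted here). The counter is read back through bdevperf's bdev_get_iostat RPC and the jq filter shown in the trace. A minimal stand-alone sketch of that query, using the same rpc.py path, socket, and bdev name as this run (the variable names are only illustrative):

# Read the transient transport error counter the way get_transient_errcount does above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# Pass condition used by the test: at least one injected digest error must have
# surfaced as a transient transport error completion.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"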
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93856
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93856 /var/tmp/bperf.sock
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93856 ']'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
01:29:58.157 05:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:29:58.157 I/O size of 131072 is greater than zero copy threshold (65536).
01:29:58.157 Zero copy mechanism will not be used.
01:29:58.157 [2024-12-09 05:24:49.721749] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:29:58.157 [2024-12-09 05:24:49.721805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93856 ]
01:29:58.419 [2024-12-09 05:24:49.857129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:29:58.419 [2024-12-09 05:24:49.886547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
01:29:58.988 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:29:58.988 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
01:29:58.988 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:29:58.988 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:29:59.246 05:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:29:59.504 nvme0n1
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:29:59.504 05:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
01:29:59.763 I/O size of 131072 is greater than zero copy threshold (65536).
01:29:59.763 Zero copy mechanism will not be used.
01:29:59.763 Running I/O for 2 seconds...
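The trace above sets up the second error-injection pass (randwrite, 128 KiB I/O, queue depth 16). A condensed sketch of that RPC sequence follows; it is not the test script itself, reuses the commands exactly as traced, launches bdevperf in the background only for brevity, and assumes the nvmf target answers rpc_cmd on its default RPC socket (/var/tmp/spdk.sock) while bperf_rpc goes to /var/tmp/bperf.sock:

# Launch bdevperf on core 1 (-m 2) for a 2-second 128 KiB randwrite run at
# queue depth 16, waiting for RPC start-up (-z); then wait for its socket.
spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-controller NVMe error statistics and retry failed I/O indefinitely
# so every injected digest error shows up in bdev_get_iostat instead of
# failing the job.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: start with CRC32C error injection disabled (default RPC socket assumed here).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the controller with TCP data digest enabled (--ddgst).
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: switch injection to corrupt CRC32C results, with the same
# interval argument the trace uses, then kick off the timed run.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests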
01:29:59.763 [2024-12-09 05:24:51.182202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.763 [2024-12-09 05:24:51.182263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.763 [2024-12-09 05:24:51.182281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.763 [2024-12-09 05:24:51.185259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.763 [2024-12-09 05:24:51.185306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.763 [2024-12-09 05:24:51.185320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.763 [2024-12-09 05:24:51.188089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.763 [2024-12-09 05:24:51.188145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.763 [2024-12-09 05:24:51.188164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.763 [2024-12-09 05:24:51.191038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.763 [2024-12-09 05:24:51.191084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.763 [2024-12-09 05:24:51.191097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.763 [2024-12-09 05:24:51.193833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.193885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.193896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.196635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.196696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.196714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.199400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.199472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.199490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.202217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.202265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.202277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.205007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.205061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.205073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.207764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.207817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.207829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.210517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.210582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.210594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.213301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.213355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.213366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.216036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.216087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.216099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.218775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.218834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.218846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.221553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.221610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.221622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.224343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.224396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.224408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.227128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.227167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.227179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.229904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.229950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.229961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.232719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.232769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.232781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.235486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.235533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.235545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.238254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.238324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.238336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.241039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.241091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.243785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.243844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.243855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.246579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.246626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.246638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.249388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.249439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.249451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.252146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.252206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.252218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.254918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.254970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.254982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.257739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.257794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.257806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.260496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.260548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.260570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.263330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.263380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.764 [2024-12-09 05:24:51.263392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.764 [2024-12-09 05:24:51.266118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.764 [2024-12-09 05:24:51.266170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.266182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.268922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.268973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.268985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.271694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.271749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.271761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.274415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.274474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.274486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.277221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.277271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 
05:24:51.277283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.279992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.280044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.280056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.282748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.282801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.282813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.285481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.285552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.285574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.288251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.288310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.288322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.291025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.291084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.291096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.293794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.293843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.293855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.296533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.296589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:29:59.765 [2024-12-09 05:24:51.296601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.299310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.299371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.299382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.302092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.302142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.302154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.304860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.304919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.304930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.307620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.307671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.310366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.310417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.313143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.313202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.313214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.315930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.315981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.315993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.318710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.318761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.321484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.321531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.321542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.324253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.324309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.326997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.327047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.327059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.329738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.765 [2024-12-09 05:24:51.329792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.765 [2024-12-09 05:24:51.329805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.765 [2024-12-09 05:24:51.332489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.332540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.332552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.335267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.335307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.335319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.338038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.338094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.338106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.340786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.340837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.340849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.343539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.343606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.343618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.346339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.346386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.346398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.349132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.349184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.349196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.351908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.351966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.351978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.354710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.354759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.354771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.357488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.357544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.357556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.360297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.360352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.360363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.363053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.363104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.363115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.365850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.365902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.365915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.368711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.368761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.368773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.371614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.371658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.371669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.374413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.374478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.374490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:29:59.766 [2024-12-09 05:24:51.377322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:29:59.766 [2024-12-09 05:24:51.377364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:59.766 [2024-12-09 05:24:51.377377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.380173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.380216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.380228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.382962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.383033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.383045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.385746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.385796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.385808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.388482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.388531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.388543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.391307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.391353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.391365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.394109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 
05:24:51.394160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.394172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.396864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.396927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.399675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.399718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.399730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.402496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.402539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.402551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.405278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.405325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.405337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.408097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.408145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.408157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.410902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.410941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.410953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.413767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with 
pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.413806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.027 [2024-12-09 05:24:51.413818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.027 [2024-12-09 05:24:51.416542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.027 [2024-12-09 05:24:51.416604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.416617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.419340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.419380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.419392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.421875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.422074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.422088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.424509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.424740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.424759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.427127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.427313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.427330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.429668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.429860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.429883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.432170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.432377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.432396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.434679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.434862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.434881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.437230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.437423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.437443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.439731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.439922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.439944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.442249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.442446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.442465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.444732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.444921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.444940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.447235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.447433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.447452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.449772] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.449971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.449990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.452293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.452505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.452524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.454820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.455018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.455037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.457336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.457529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.457546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.459861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.460059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.460078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.462375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.462584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.462602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.464896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.465097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.028 [2024-12-09 05:24:51.465117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.028 [2024-12-09 05:24:51.467404] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.028 [2024-12-09 05:24:51.467606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.467624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.469953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.470153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.470172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.472463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.472683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.472702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.474974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.475169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.475188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.477508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.477718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.477741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.480016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.480214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.480233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.482526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.482729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.482753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.029 
[2024-12-09 05:24:51.485075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.485265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.485279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.487622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.487813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.487833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.490106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.490306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.490326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.492639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.492844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.492864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.495133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.495330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.495350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.497656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.497854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.497874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.500166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.500376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.500389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.502675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.502884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.502904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.505180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.505367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.505389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.507726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.507916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.507934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.510250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.510444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.510462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.512764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.512985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.515295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.515483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.515502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.517833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.029 [2024-12-09 05:24:51.518025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.029 [2024-12-09 05:24:51.518044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.029 [2024-12-09 05:24:51.520358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.520555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.520584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.522850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.523048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.523067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.525369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.525573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.525589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.527910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.528100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.528119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.530425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.530632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.530651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.532971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.533172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.533212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.535506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.535715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.535735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.538027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.538230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.538249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.540542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.540757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.540777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.543048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.543234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.543254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.545537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.545748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.545767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.548054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.548260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.548280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.550546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.550759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.550776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.553089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.553291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.553307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.555588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.555790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.555808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.558137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.558329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.558345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.560651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.560842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.560858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.563144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.563335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.563352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.565536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.565702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.565719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.030 [2024-12-09 05:24:51.567968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.030 [2024-12-09 05:24:51.568122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.030 [2024-12-09 05:24:51.568139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.570387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.570557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 
05:24:51.570583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.572817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.572956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.572973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.575246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.575382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.575399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.577650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.577823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.577845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.580087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.580172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.580190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.582499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.582641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.582658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.584937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.585067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.585084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.587367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.587497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:30:00.031 [2024-12-09 05:24:51.587514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.589811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.589953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.589969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.592278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.592375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.592392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.594701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.594842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.594859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.597098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.597258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.597276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.599527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.599691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.599708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.601965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.602121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.602138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.604435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.604581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.604597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.606818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.606959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.606976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.609232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.609385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.611678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.611816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.611834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.614123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.614263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.614280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.031 [2024-12-09 05:24:51.616572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.031 [2024-12-09 05:24:51.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.031 [2024-12-09 05:24:51.616728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.619007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.619147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.619164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.621414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.621571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.621587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.623811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.623971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.623988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.626211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.626352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.626369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.628624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.628770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.628787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.631000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.631159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.631173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.633389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.633523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.633540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.635774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.635922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.635939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.032 [2024-12-09 05:24:51.638208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.032 [2024-12-09 05:24:51.638364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.032 [2024-12-09 05:24:51.638381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.640632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.640799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.640816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.643018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.643168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.643185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.645415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.645553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.645579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.647814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.647952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.647969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.650218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.650369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.650386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.652642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.652783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.652800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.655058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.655200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.657489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.325 [2024-12-09 05:24:51.657650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.325 [2024-12-09 05:24:51.657666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.325 [2024-12-09 05:24:51.659900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.660037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.660055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.662302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.662461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.664738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.664894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.664912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.667162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.667318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.669597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.669733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.669750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.671990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 
05:24:51.672134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.672151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.674421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.674557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.676861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.676998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.677015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.679285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.679430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.679447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.681703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.681844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.681861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.684123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.684271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.684288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.686539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.686705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.689003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with 
pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.689151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.689168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.691426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.691600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.691619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.693885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.694031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.694048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.696282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.696425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.698677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.698818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.698835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.701113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.701250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.701268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.703547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.703703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.703721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.705958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.706106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.706122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.708338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.708476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.708493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.710754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.710892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.710909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.713181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.713324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.713341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.715607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.715749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.715765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.718024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.718184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.718202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.720428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.720577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.722835] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.326 [2024-12-09 05:24:51.722984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.326 [2024-12-09 05:24:51.723001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.326 [2024-12-09 05:24:51.725262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.725398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.725415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.727619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.727768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.727785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.729991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.730151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.730168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.732424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.732597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.732613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.734853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.735013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.735030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.737311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.737456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.739703] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.739864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.739882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.742123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.742260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.742277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.744519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.744675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.744691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.746907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.747046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.747063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.749327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.749470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.749488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.751743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.751878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.751896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.754147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.754294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.754311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 
[2024-12-09 05:24:51.756570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.756729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.756747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.758954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.759116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.759133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.761372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.761532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.761549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.763801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.763935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.763952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.766200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.766357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.766374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.768640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.768768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.768785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.771051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.771197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.773480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.775866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.776025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.776042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.778269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.778428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.778445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.780725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.780879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.783146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.783281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.783298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.785596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.785746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.785762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.788014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.788171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.327 [2024-12-09 05:24:51.788188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.327 [2024-12-09 05:24:51.790448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.327 [2024-12-09 05:24:51.790596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.790612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.792878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.793021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.793037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.795309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.795447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.795463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.797760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.797917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.797934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.800184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.800336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.800352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.802604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.802759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.802776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.805026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.805184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.805201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.807456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.807620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.807636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.809897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.810038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.812340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.812491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.812509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.814769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.814902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.814919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.817197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.817327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.817344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.819625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.819777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.819795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.822078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.822219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.822235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.824465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.824622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.824638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.826851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.827013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.827036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.829281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.829421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.829438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.831639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.831798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.831818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.834087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.834222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.834239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.836513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.836669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.836685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.838917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.839084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 
05:24:51.839098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.841340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.841478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.841495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.843711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.843852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.843868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.846129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.846280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.846298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.848508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.848661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.848678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.850934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.851074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.851091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.853308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.853463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.328 [2024-12-09 05:24:51.853481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.328 [2024-12-09 05:24:51.855729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.328 [2024-12-09 05:24:51.855868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:30:00.329 [2024-12-09 05:24:51.855884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.858159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.858319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.858336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.860577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.860710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.860727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.862987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.863126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.863142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.865404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.865557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.865584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.867816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.867976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.870217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.870354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.870371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.872655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.872795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.875061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.875198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.875215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.877479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.877642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.879882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.880045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.880062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.882323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.882484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.882500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.884774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.884910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.884927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.887146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.887286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.887302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.889588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.889729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.889749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.891982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.892127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.892144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.894396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.894555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.894583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.896840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.896973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.896990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.899266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.899402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.899419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.901677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.901823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.901840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.904106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.904249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.904265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.906574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.906724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.906740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.909014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.909154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.909170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.911388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.911537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.911557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.913830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.913980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.913997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.916218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.916383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.918659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.918839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.329 [2024-12-09 05:24:51.921098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.329 [2024-12-09 05:24:51.921242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.329 [2024-12-09 05:24:51.921260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.923512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.923664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.923681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.925948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.926089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.926105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.928402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.928552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.928577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.930817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.930977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.930995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.933245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.933379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.933396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.935673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.935830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.330 [2024-12-09 05:24:51.938070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.330 [2024-12-09 05:24:51.938229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.330 [2024-12-09 05:24:51.938246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.940511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 
05:24:51.940682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.942976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.943119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.943135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.945381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.945532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.945549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.947796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.947941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.947958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.950221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.950356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.950373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.952650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.952802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.952818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.955072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.955210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.955227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.957449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with 
pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.957621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.957637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.959898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.960027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.960043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.962297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.962444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.962461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.964742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.964878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.964895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.967131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.967278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.967296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.969533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.969684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.969700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.971931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.972075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.972092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.974348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.974501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.974518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.976773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.976935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.976952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.979195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.979355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.979372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.981614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.981767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.981784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.984065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.984200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.984216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.986496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.986657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.986674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.988953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.989108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.991368] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.991506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.991522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.993792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.993920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.993937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.996186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.996352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.996369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.591 [2024-12-09 05:24:51.998619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.591 [2024-12-09 05:24:51.998753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.591 [2024-12-09 05:24:51.998769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.001019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.001175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.001192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.003442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.003595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.003610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.005876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.006006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.006023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.008296] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.008437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.008454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.010754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.010900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.010916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.013174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.013337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.013353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.015602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.015736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.015753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.018025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.018158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.018175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.020451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.020599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.022866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.023014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.023031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 
[2024-12-09 05:24:52.025310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.025450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.025467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.027707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.027864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.030136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.030285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.030302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.032548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.032698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.032714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.034958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.035098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.035115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.037362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.037516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.037533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.039789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.039931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.039947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.042226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.042370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.042387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.044661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.044799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.044816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.047033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.047187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.047204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.049448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.049600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.049616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.051836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.051990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.052007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.054257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.054400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.054417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.056699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.056845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.056861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.059112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.059248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.059265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.061513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.061678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.061695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.592 [2024-12-09 05:24:52.063958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.592 [2024-12-09 05:24:52.064094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.592 [2024-12-09 05:24:52.064111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.066351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.066483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.066500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.068798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.068933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.068949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.071222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.071358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.073676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.073822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.073839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.076084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.076258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.076274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.078478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.078646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.080924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.081060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.081077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.083348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.083498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.085749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.085910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.085928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.088141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.088294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.090506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.090655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.090671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.092900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.093061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.093078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.095331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.095475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.095492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.097734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.097884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.097901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.100124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.100289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.100306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.102530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.102685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.102702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.104958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.105106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.105123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.107360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.107506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 
05:24:52.107523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.109725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.109862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.109878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.112108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.112269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.112286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.114501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.114671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.114687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.116938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.117083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.117099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.119308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.119448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.119466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.121724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.121891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.121910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.124132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.124289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:30:00.593 [2024-12-09 05:24:52.124305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.126532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.126681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.126698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.593 [2024-12-09 05:24:52.128944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.593 [2024-12-09 05:24:52.129100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.593 [2024-12-09 05:24:52.129116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.131342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.131489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.131506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.133749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.133911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.133929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.136167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.136329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.136345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.138545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.138695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.138711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.140963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.141100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.141117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.143317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.143466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.143483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.145710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.145854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.145871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.148095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.148267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.148281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.150485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.150658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.150674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.152912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.153071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.155317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.155467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.155484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.157718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.157875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.157892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.160112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.160273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.160289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.162492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.162641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.162658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.164912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.165048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.165065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.167297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.167433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.167450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.169717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.169856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.169872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.172092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.172266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.172282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.174505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.174664] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.174680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.176906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.177046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.177063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.179273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.179449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 12318.00 IOPS, 1539.75 MiB/s [2024-12-09T05:24:52.211Z] [2024-12-09 05:24:52.183189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.183254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.183267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.185983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.186023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.186035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.188756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.188803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.191471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:00.594 [2024-12-09 05:24:52.191532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:00.594 [2024-12-09 05:24:52.191544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:00.594 [2024-12-09 05:24:52.194249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 
01:30:00.594 [2024-12-09 05:24:52.194302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:30:00.594 [2024-12-09 05:24:52.194315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:30:00.594 [2024-12-09 05:24:52.197004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8
01:30:00.594 [2024-12-09 05:24:52.197076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:30:00.594 [2024-12-09 05:24:52.197088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-entry sequence repeats from 05:24:52.199 through 05:24:52.595: tcp.c:2233:data_crc32_calc_done reports a data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8, the offending WRITE command (sqid:1 cid:2 nsid:1, len:32, varying lba) is printed, and each command completes with TRANSIENT TRANSPORT ERROR (00/22) ...]
01:30:01.120 [2024-12-09 05:24:52.598400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8
01:30:01.120 [2024-12-09 05:24:52.598465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:30:01.120 [2024-12-09 05:24:52.598477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.601214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.601255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.601268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.604007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.604059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.604071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.606772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.606824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.606836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.609537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.609615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.609627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.612357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.612418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.612430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.615120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.615176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.615188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.617953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.618006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.618018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.620739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.620782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.620794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.120 [2024-12-09 05:24:52.623524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.120 [2024-12-09 05:24:52.623587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.120 [2024-12-09 05:24:52.623599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.626309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.626354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.626366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.629100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.629153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.629165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.631861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.631913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.631924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.634670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.634717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.634730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.637479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.637521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.637533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.640269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.640330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.640342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.643086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.643126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.643139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.645903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.645942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.645954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.648738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.648790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.648802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.651503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.651569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.651582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.654331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.654372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.654384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.657105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.657157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 
05:24:52.657170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.659896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.659937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.659949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.662710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.662749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.662761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.665484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.665545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.665557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.668325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.668369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.668381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.671129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.671182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.671194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.673906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.673972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.673984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.676717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.676778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:30:01.121 [2024-12-09 05:24:52.676790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.679466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.679519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.679532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.682256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.682297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.682308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.685077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.685118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.685130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.687849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.687890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.687902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.690618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.690672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.690684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.693428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.693470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.693482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.696203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.696257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.696270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.699016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.121 [2024-12-09 05:24:52.699057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.121 [2024-12-09 05:24:52.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.121 [2024-12-09 05:24:52.701864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.701904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.701916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.704676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.704721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.704733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.707451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.707512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.707524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.710317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.710365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.710377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.713150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.713204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.713216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.715965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.716008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.716020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.718792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.718833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.718845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.721600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.721655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.721667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.724369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.724426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.724438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.727148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.727201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.727213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.122 [2024-12-09 05:24:52.729916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.122 [2024-12-09 05:24:52.729968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.122 [2024-12-09 05:24:52.729980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.732745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.732788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.732800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.735547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.735598] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.738344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.738389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.738401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.741133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.741205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.741217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.743884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.743957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.746719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.746759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.746771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.749537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.749594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.749606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.752334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.752379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.752391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.755132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.755185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.755197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.757925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.757985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.757997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.760744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.760785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.760797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.763506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.763570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.763582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.766280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.766327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.766340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.769115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.769155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.769167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.771884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.771937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.771949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.774707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 
05:24:52.774751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.774763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.777482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.777542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.777554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.780346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.780397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.780409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.783150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.783203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.783215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.785946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.786003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.786015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.788793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.788833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.791592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.791640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.791653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.794370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 
01:30:01.382 [2024-12-09 05:24:52.794423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.794436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.382 [2024-12-09 05:24:52.797167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.382 [2024-12-09 05:24:52.797215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.382 [2024-12-09 05:24:52.797227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.799934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.799995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.800007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.802726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.802781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.802794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.805527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.805588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.805600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.808344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.808399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.808411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.811150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.811213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.813926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.813992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.816706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.816757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.816769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.819452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.819512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.819524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.822300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.822339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.822351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.825115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.825166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.825178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.827858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.827908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.827920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.830636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.830687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.830699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.833398] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.833460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.836199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.836240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.836259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.838968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.839020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.839032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.841775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.841813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.841825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.844598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.844649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.844661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.847381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.847428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.847440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.850226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.850265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.850276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.853022] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.853082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.853094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.855758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.855808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.855820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.858522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.858577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.858589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.861327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.861381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.861393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.864099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.864160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.864172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.866912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.866964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.866976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.383 [2024-12-09 05:24:52.869693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.869744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.383 [2024-12-09 05:24:52.869756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.383 
[2024-12-09 05:24:52.872487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.383 [2024-12-09 05:24:52.872537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.872549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.875291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.875334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.875346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.878102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.878143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.878155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.880918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.880960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.880972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.883743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.883783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.883795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.886569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.886608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.886620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.889366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.889408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.889420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.892151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.892204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.892216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.894934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.894985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.894997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.897781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.897819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.897831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.900605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.900649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.900661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.903358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.903414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.903426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.906153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.906203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.906215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.908936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.908989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.909001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.911732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.911774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.911785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.914507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.914558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.914579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.917299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.917342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.917353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.920078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.920134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.922874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.922920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.922932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.925659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.925715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.925727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.928413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.928464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.928476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.931173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.931225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.931237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.933916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.933971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.933983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.936653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.936725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.936737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.939412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.939464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.939475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.942171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.942224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.942235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.944904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.384 [2024-12-09 05:24:52.944964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.384 [2024-12-09 05:24:52.944975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.384 [2024-12-09 05:24:52.947689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.947742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.947754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.950452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.950500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.950512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.953206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.953270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.953282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.955966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.956026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.956038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.958737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.958788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.958800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.961485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.961538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.961549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.964215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.964293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.964305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.966992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.967044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 
05:24:52.967056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.969759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.969814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.969826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.972511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.972578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.972590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.975259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.975318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.975330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.978048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.978105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.978117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.980815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.980866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.980878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.983594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.983645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.983657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.986319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.986372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:30:01.385 [2024-12-09 05:24:52.986384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.989076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.989128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.989140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.991821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.991880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.385 [2024-12-09 05:24:52.994554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.385 [2024-12-09 05:24:52.994623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.385 [2024-12-09 05:24:52.994635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:52.997326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:52.997378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:52.997391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.000093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.000135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.000146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.002838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.002889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.002901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.005632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.005680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.005691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.008372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.008423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.008435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.011090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.011160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.011172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.013843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.013895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.013907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.016657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.016703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.016715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.019379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.019450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.019462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.022166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.022227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.022239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.024959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.025013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.025026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.027742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.027794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.027806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.645 [2024-12-09 05:24:53.030482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.645 [2024-12-09 05:24:53.030547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.645 [2024-12-09 05:24:53.030570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.033305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.033347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.033360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.036105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.036159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.036171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.038936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.038988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.039001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.041751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.041804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.041817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.044516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.044555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.044577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.047298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.047347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.047360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.050093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.050146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.050158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.052920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.052961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.052973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.055722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.055766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.055779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.058524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.058588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.058600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.061345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.061386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.061398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.064132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.064192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.064204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.066935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.066982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.066994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.069780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.069822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.069834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.072624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.072673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.072686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.075389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.075448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.075460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.078168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.078234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.078246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.081024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.081064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.081077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.083806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 
05:24:53.083858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.083870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.086594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.086645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.086657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.089363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.089403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.089415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.092138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.092194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.092206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.094932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.094975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.094987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.097741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.097784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.097797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.100504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.100557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.100579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.103327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with 
pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.103374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.646 [2024-12-09 05:24:53.103386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.646 [2024-12-09 05:24:53.106149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.646 [2024-12-09 05:24:53.106202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.106215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.108924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.108976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.108989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.111721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.111794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.114518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.114580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.114593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.117316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.117360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.117373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.120121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.120176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.122901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.122952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.122964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.125719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.125770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.125782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.128511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.128551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.128573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.131308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.131354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.131366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.134130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.134186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.134198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.136905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.136965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.136977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.139726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.139768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.139780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.142447] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.142508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.142521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.145232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.145293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.145305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.148036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.148091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.148104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.150856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.150909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.150921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.153671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.153731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.153744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.156440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.156493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.156504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.159233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.159300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.162050] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.162103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.162115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.164804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.164857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.164869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.167591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.167634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.167646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.170361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.170425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.170438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.173140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.173201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.173214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.175964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.176013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.176024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:30:01.647 [2024-12-09 05:24:53.178714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.647 [2024-12-09 05:24:53.178767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.647 [2024-12-09 05:24:53.178779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:30:01.647 
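The repeated data_crc32_calc_done errors above are the data-digest failures this digest_error test injects on purpose; each one is reported back to the host as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion. The count the test later asserts on is read from the bperf RPC socket; a minimal sketch of that query, using the same rpc.py socket, bdev name and jq filter that appear further down in this log (shown here as a condensed one-liner rather than the multi-line filter the harness traces):

    # Ask bperf for per-bdev iostat and pull out the transient transport error counter
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

In this run that counter comes back as 757, which is what the (( 757 > 0 )) check below verifies.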
11710.50 IOPS, 1463.81 MiB/s [2024-12-09T05:24:53.264Z] [2024-12-09 05:24:53.182327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a81360) with pdu=0x200016eff3c8 01:30:01.648 [2024-12-09 05:24:53.182369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:30:01.648 [2024-12-09 05:24:53.182382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:30:01.648 01:30:01.648 Latency(us) 01:30:01.648 [2024-12-09T05:24:53.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:30:01.648 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:30:01.648 nvme0n1 : 2.00 11705.56 1463.19 0.00 0.00 1363.78 882.22 3705.30 01:30:01.648 [2024-12-09T05:24:53.265Z] =================================================================================================================== 01:30:01.648 [2024-12-09T05:24:53.265Z] Total : 11705.56 1463.19 0.00 0.00 1363.78 882.22 3705.30 01:30:01.648 { 01:30:01.648 "results": [ 01:30:01.648 { 01:30:01.648 "job": "nvme0n1", 01:30:01.648 "core_mask": "0x2", 01:30:01.648 "workload": "randwrite", 01:30:01.648 "status": "finished", 01:30:01.648 "queue_depth": 16, 01:30:01.648 "io_size": 131072, 01:30:01.648 "runtime": 2.002724, 01:30:01.648 "iops": 11705.557031323338, 01:30:01.648 "mibps": 1463.1946289154173, 01:30:01.648 "io_failed": 0, 01:30:01.648 "io_timeout": 0, 01:30:01.648 "avg_latency_us": 1363.7817601449015, 01:30:01.648 "min_latency_us": 882.2153846153847, 01:30:01.648 "max_latency_us": 3705.3046153846153 01:30:01.648 } 01:30:01.648 ], 01:30:01.648 "core_count": 1 01:30:01.648 } 01:30:01.648 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:30:01.648 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:30:01.648 | .driver_specific 01:30:01.648 | .nvme_error 01:30:01.648 | .status_code 01:30:01.648 | .command_transient_transport_error' 01:30:01.648 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:30:01.648 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 757 > 0 )) 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93856 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93856 ']' 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93856 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93856 01:30:01.907 killing process with pid 93856 01:30:01.907 Received shutdown signal, test time was about 2.000000 seconds 01:30:01.907 01:30:01.907 Latency(us) 01:30:01.907 [2024-12-09T05:24:53.524Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:30:01.907 [2024-12-09T05:24:53.524Z] =================================================================================================================== 01:30:01.907 [2024-12-09T05:24:53.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93856' 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93856 01:30:01.907 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93856 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93585 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93585 ']' 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93585 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93585 01:30:02.165 killing process with pid 93585 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93585' 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93585 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93585 01:30:02.165 ************************************ 01:30:02.165 END TEST nvmf_digest_error 01:30:02.165 ************************************ 01:30:02.165 01:30:02.165 real 0m15.310s 01:30:02.165 user 0m29.366s 01:30:02.165 sys 0m3.238s 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 01:30:02.165 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 01:30:02.423 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:30:02.423 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 01:30:02.423 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 
01:30:02.423 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:30:02.423 rmmod nvme_tcp 01:30:02.423 rmmod nvme_fabrics 01:30:02.423 rmmod nvme_keyring 01:30:02.423 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 93585 ']' 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 93585 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 93585 ']' 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 93585 01:30:02.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93585) - No such process 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 93585 is not found' 01:30:02.424 Process with pid 93585 is not found 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:30:02.424 05:24:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:30:02.424 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 01:30:02.683 01:30:02.683 real 0m32.797s 01:30:02.683 user 1m1.230s 01:30:02.683 sys 0m7.123s 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:30:02.683 ************************************ 01:30:02.683 END TEST nvmf_digest 01:30:02.683 ************************************ 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:30:02.683 ************************************ 01:30:02.683 START TEST nvmf_mdns_discovery 01:30:02.683 ************************************ 01:30:02.683 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:30:02.683 * Looking for test storage... 
01:30:02.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:02.684 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:02.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:02.943 --rc genhtml_branch_coverage=1 01:30:02.943 --rc genhtml_function_coverage=1 01:30:02.943 --rc genhtml_legend=1 01:30:02.943 --rc geninfo_all_blocks=1 01:30:02.943 --rc geninfo_unexecuted_blocks=1 01:30:02.943 01:30:02.943 ' 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:02.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:02.943 --rc genhtml_branch_coverage=1 01:30:02.943 --rc genhtml_function_coverage=1 01:30:02.943 --rc genhtml_legend=1 01:30:02.943 --rc geninfo_all_blocks=1 01:30:02.943 --rc geninfo_unexecuted_blocks=1 01:30:02.943 01:30:02.943 ' 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:02.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:02.943 --rc genhtml_branch_coverage=1 01:30:02.943 --rc genhtml_function_coverage=1 01:30:02.943 --rc genhtml_legend=1 01:30:02.943 --rc geninfo_all_blocks=1 01:30:02.943 --rc geninfo_unexecuted_blocks=1 01:30:02.943 01:30:02.943 ' 01:30:02.943 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:02.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:02.943 --rc genhtml_branch_coverage=1 01:30:02.943 --rc genhtml_function_coverage=1 01:30:02.943 --rc genhtml_legend=1 01:30:02.943 --rc geninfo_all_blocks=1 01:30:02.943 --rc geninfo_unexecuted_blocks=1 01:30:02.943 01:30:02.943 ' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:30:02.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:30:02.944 Cannot find device "nvmf_init_br" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:30:02.944 Cannot find device "nvmf_init_br2" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:30:02.944 Cannot find device "nvmf_tgt_br" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:30:02.944 Cannot find device "nvmf_tgt_br2" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:30:02.944 Cannot find device "nvmf_init_br" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:30:02.944 Cannot find device "nvmf_init_br2" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:30:02.944 Cannot find device "nvmf_tgt_br" 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 01:30:02.944 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:30:02.945 Cannot find device "nvmf_tgt_br2" 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:30:02.945 Cannot find device "nvmf_br" 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:30:02.945 Cannot find device "nvmf_init_if" 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:30:02.945 Cannot find device "nvmf_init_if2" 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 01:30:02.945 05:24:54 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:30:02.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:30:02.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
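The ip(8) calls traced up to this point build the virtual topology the rest of the run relies on: one network namespace for the SPDK target, four veth pairs, and a bridge joining them. Condensed into a standalone sketch (interface names and 10.0.0.x addresses taken verbatim from the trace above; this is an illustration of the setup, not the suite's own nvmf_veth_init helper):

    # Target runs inside its own namespace; initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses in the root namespace, target addresses inside the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring every link up, then create the bridge that will join the *_br peers.
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up

The trace that follows enslaves the four *_br peers to nvmf_br, adds iptables ACCEPT rules for TCP port 4420 on the initiator interfaces plus bridge forwarding, and then confirms reachability with single pings: 10.0.0.3/10.0.0.4 from the root namespace and 10.0.0.1/10.0.0.2 from inside the namespace.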
01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:30:02.945 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:30:03.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:30:03.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 01:30:03.204 01:30:03.204 --- 10.0.0.3 ping statistics --- 01:30:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:03.204 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:30:03.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:30:03.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 01:30:03.204 01:30:03.204 --- 10.0.0.4 ping statistics --- 01:30:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:03.204 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:30:03.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:30:03.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:30:03.204 01:30:03.204 --- 10.0.0.1 ping statistics --- 01:30:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:03.204 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:30:03.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:30:03.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 01:30:03.204 01:30:03.204 --- 10.0.0.2 ping statistics --- 01:30:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:03.204 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94196 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94196 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94196 ']' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:03.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:03.204 05:24:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:03.204 [2024-12-09 05:24:54.661691] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:30:03.204 [2024-12-09 05:24:54.661732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:30:03.204 [2024-12-09 05:24:54.792708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:03.462 [2024-12-09 05:24:54.827004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:30:03.462 [2024-12-09 05:24:54.827041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:30:03.462 [2024-12-09 05:24:54.827047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:30:03.462 [2024-12-09 05:24:54.827052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:30:03.462 [2024-12-09 05:24:54.827057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:30:03.462 [2024-12-09 05:24:54.827301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.032 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 [2024-12-09 05:24:55.726669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 [2024-12-09 05:24:55.734770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 null0 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 null1 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 null2 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 null3 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
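On the target side, the rpc_cmd invocations traced above amount to the following sequence against the target application's default RPC socket (/var/tmp/spdk.sock). The setup_mdns_target wrapper is a hypothetical grouping added for readability; the individual RPCs and their arguments are exactly the ones traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same rpc.py used elsewhere in this run

    setup_mdns_target() {
        # Filter discovery log entries by address, then finish framework init
        # (the target was started with --wait-for-rpc).
        $rpc nvmf_set_config --discovery-filter=address
        $rpc framework_start_init
        # TCP transport with the same -o -u 8192 options as the traced nvmf_create_transport call.
        $rpc nvmf_create_transport -t tcp -o -u 8192
        # Discovery subsystem listener on the first target interface, port 8009.
        $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
        # Four null bdevs (null0..null3, created with the traced "1000 512" sizing) that later
        # become namespaces of nqn.2016-06.io.spdk:cnode0 and cnode20.
        for b in null0 null1 null2 null3; do
            $rpc bdev_null_create "$b" 1000 512
        done
        $rpc bdev_wait_for_examine
    }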
01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94246 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94246 /tmp/host.sock 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94246 ']' 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:04.295 05:24:55 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:30:04.295 [2024-12-09 05:24:55.822830] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:30:04.295 [2024-12-09 05:24:55.823046] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94246 ] 01:30:04.555 [2024-12-09 05:24:55.962682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:30:04.555 [2024-12-09 05:24:55.997965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:05.126 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:05.126 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 01:30:05.126 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 01:30:05.126 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 01:30:05.126 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 01:30:05.441 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94276 01:30:05.441 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 01:30:05.441 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 01:30:05.441 05:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 01:30:05.441 Process 1072 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 01:30:05.441 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 01:30:05.441 Successfully dropped root privileges. 01:30:06.378 avahi-daemon 0.8 starting up. 01:30:06.378 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 01:30:06.378 Successfully called chroot(). 
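The avahi-daemon instance being started inside the target namespace is handed its configuration through process substitution (visible above as -f /dev/fd/63). Expanded, the echo -e string is this minimal config, restricting mDNS to the two target-side interfaces and to IPv4:

    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no

which the test launches roughly as (a sketch of the traced command, run in the background so $! can be recorded as avahipid):

    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!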
01:30:06.378 Successfully dropped remaining capabilities. 01:30:06.378 No service file found in /etc/avahi/services. 01:30:06.378 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 01:30:06.378 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 01:30:06.378 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 01:30:06.378 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 01:30:06.378 Network interface enumeration completed. 01:30:06.378 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 01:30:06.378 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 01:30:06.378 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 01:30:06.378 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 01:30:06.378 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3419298960. 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 
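On the host side, the trace above boils down to two RPCs against the host application's /tmp/host.sock socket plus the two jq helpers the test keeps polling. A sketch, assuming rpc_cmd expands to scripts/rpc.py as it does elsewhere in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    # Enable bdev_nvme debug logging, then start mDNS discovery for the
    # _nvme-disc._tcp service type, using "mdns" as the controller name base
    # and nqn.2021-12.io.spdk:test as the host NQN.
    $rpc log_set_flag bdev_nvme
    $rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # Helpers mirroring get_subsystem_names / get_bdev_list in the trace:
    # list attached controllers and block devices as space-separated, sorted names.
    get_subsystem_names() { $rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { $rpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }

Both helpers return empty strings at this point in the trace, since no subsystems have been published over mDNS yet; the later "mdns0_nvme0" entries appear once discovery attaches to cnode0.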
01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 
05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.378 05:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.636 [2024-12-09 05:24:58.011091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:06.636 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:06.637 05:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 01:30:06.637 [2024-12-09 05:24:58.084770] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:30:07.569 [2024-12-09 05:24:58.984780] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:30:07.827 [2024-12-09 05:24:59.284791] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:30:07.827 [2024-12-09 05:24:59.284806] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:30:07.827 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:07.827 cookie is 0 01:30:07.827 is_local: 1 01:30:07.827 our_own: 0 01:30:07.827 wide_area: 0 01:30:07.827 multicast: 1 01:30:07.827 cached: 1 01:30:07.827 [2024-12-09 05:24:59.384788] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:30:07.827 [2024-12-09 05:24:59.384802] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:30:07.827 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:07.827 cookie is 0 01:30:07.827 is_local: 1 01:30:07.827 our_own: 0 01:30:07.827 wide_area: 0 01:30:07.827 multicast: 1 01:30:07.827 cached: 1 01:30:08.760 [2024-12-09 05:25:00.290482] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:30:08.760 [2024-12-09 05:25:00.290501] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:30:08.760 [2024-12-09 05:25:00.290510] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:09.018 [2024-12-09 05:25:00.376572] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem 
mdns0_nvme0 01:30:09.018 [2024-12-09 05:25:00.385464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:09.018 [2024-12-09 05:25:00.385494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x534440 with addr=10.0.0.4, port=8009 01:30:09.018 [2024-12-09 05:25:00.385506] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:30:09.018 [2024-12-09 05:25:00.385510] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:30:09.018 [2024-12-09 05:25:00.385515] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:30:09.018 [2024-12-09 05:25:00.430794] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:30:09.018 [2024-12-09 05:25:00.431261] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x54ab80:1 started. 01:30:09.018 [2024-12-09 05:25:00.432521] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:30:09.018 [2024-12-09 05:25:00.432538] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:30:09.018 [2024-12-09 05:25:00.439166] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x54ab80 was disconnected and freed. delete nvme_qpair. 01:30:09.952 [2024-12-09 05:25:01.385485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:09.952 [2024-12-09 05:25:01.385516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x524e30 with addr=10.0.0.4, port=8009 01:30:09.952 [2024-12-09 05:25:01.385528] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:30:09.952 [2024-12-09 05:25:01.385533] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:30:09.952 [2024-12-09 05:25:01.385538] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:30:10.902 [2024-12-09 05:25:02.385475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:10.902 [2024-12-09 05:25:02.385582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5336f0 with addr=10.0.0.4, port=8009 01:30:10.902 [2024-12-09 05:25:02.385596] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:30:10.902 [2024-12-09 05:25:02.385601] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:30:10.902 [2024-12-09 05:25:02.385606] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:30:11.467 
05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:11.467 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:11.467 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:11.467 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:11.467 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:11.728 [2024-12-09 05:25:03.088493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 01:30:11.728 [2024-12-09 05:25:03.089628] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:11.728 [2024-12-09 05:25:03.089645] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:11.728 
05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:11.728 [2024-12-09 05:25:03.096464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:30:11.728 [2024-12-09 05:25:03.097622] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:11.728 05:25:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 01:30:11.728 [2024-12-09 05:25:03.228751] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:30:11.728 [2024-12-09 05:25:03.228840] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:11.728 [2024-12-09 05:25:03.314814] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:30:11.985 [2024-12-09 05:25:03.391477] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 01:30:11.985 [2024-12-09 05:25:03.391548] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 01:30:11.985 [2024-12-09 05:25:03.391583] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:11.985 [2024-12-09 05:25:03.477553] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns1_nvme0 01:30:11.985 [2024-12-09 05:25:03.531832] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 01:30:11.985 [2024-12-09 05:25:03.532268] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x547960:1 started. 01:30:11.985 [2024-12-09 05:25:03.533406] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns1_nvme0 done 01:30:11.985 [2024-12-09 05:25:03.533492] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:30:11.985 [2024-12-09 05:25:03.540273] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x547960 was disconnected and freed. delete nvme_qpair. 
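The xtrace above brings up the second target: namespaces and an allowed host are added to nqn.2016-06.io.spdk:cnode20, nvmf_publish_mdns_prr advertises the discovery service over mDNS, and after the 5 s wait avahi resolves service 'spdk0' for both 10.0.0.3 and 10.0.0.4 while check_mdns_request_exists confirms that no 'spdk1' record is published yet. The helper below is a rough reconstruction pieced together from the mdns_discovery.sh@85-108 trace lines; function and variable names are the ones visible in the trace, the upstream script may differ in detail, and avahi-browse is the same tool invoked in the log:

check_mdns_request_exists() {
    # Args as seen in the trace: process name (spdk0/spdk1), ip, port,
    # and "found" / "not found" as the expected outcome.
    local process=$1 ip=$2 port=$3 check_type=$4
    local -a lines
    local line output

    # avahi-browse prints one record per line, e.g.
    # =;(null);IPv4;spdk1;_nvme-disc._tcp;local;<host>.local;10.0.0.4;8009;"nqn=..." "p=tcp"
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"

    for line in "${lines[@]}"; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            # A matching record is being advertised
            [[ $check_type == "found" ]] && return 0
            return 1
        fi
    done

    # Nothing matched: succeed only if the caller expected "not found"
    [[ $check_type == "found" ]] && return 1
    return 0
}

In the trace it is called both ways: with 'not found' before the 10.0.0.4 discovery listener exists, and with 'found' once the spdk1 record shows up.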
01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:30:12.550 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:12.550 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:30:12.550 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:12.550 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:12.550 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:12.550 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:12.550 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.550 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.808 [2024-12-09 05:25:04.184844] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:30:12.808 [2024-12-09 05:25:04.184939] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:30:12.808 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:12.808 cookie is 0 01:30:12.808 is_local: 1 01:30:12.808 our_own: 0 01:30:12.808 wide_area: 0 01:30:12.808 multicast: 1 01:30:12.808 cached: 1 01:30:12.808 [2024-12-09 05:25:04.184949] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:12.808 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:12.809 [2024-12-09 05:25:04.284846] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:30:12.809 [2024-12-09 05:25:04.284861] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:30:12.809 
TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:12.809 cookie is 0 01:30:12.809 is_local: 1 01:30:12.809 our_own: 0 01:30:12.809 wide_area: 0 01:30:12.809 multicast: 1 01:30:12.809 cached: 1 01:30:12.809 [2024-12-09 05:25:04.284867] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:12.809 [2024-12-09 05:25:04.381908] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x534cb0:1 started. 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:12.809 05:25:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 01:30:12.809 [2024-12-09 05:25:04.390395] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x534cb0 was disconnected and freed. delete nvme_qpair. 01:30:12.809 [2024-12-09 05:25:04.393930] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x549690:1 started. 01:30:12.809 [2024-12-09 05:25:04.400327] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x549690 was disconnected and freed. delete nvme_qpair. 
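The 05:25:04 portion of the trace leans on a handful of query helpers, each of which drives rpc_cmd -s /tmp/host.sock against the host app and flattens the JSON with jq, sort and xargs so the result can be compared to a literal string ("mdns0_nvme0 mdns1_nvme0", "4420", a notification count of 2, and so on). They are sketched below as they can be read back from the mdns_discovery.sh@65/@69/@73/@116 lines; rpc_cmd is the autotest wrapper used throughout this log, and the real definitions may differ slightly:

get_subsystem_names() {
    # Names of attached NVMe controllers, as one sorted, space-separated string
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # All bdevs exposed on the host side (one per discovered namespace)
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # $1 = controller name (e.g. mdns0_nvme0); prints the trsvcids of its paths
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # Notifications newer than $notify_id; the trace expects 2 here and
    # advances notify_id accordingly (0 -> 2 -> 4)
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

The assertions in the trace then compare these against literals, e.g. [[ $(get_subsystem_paths mdns0_nvme0) == "4420" ]].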
01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:14.185 [2024-12-09 05:25:05.469251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:30:14.185 [2024-12-09 05:25:05.469532] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:14.185 [2024-12-09 05:25:05.469551] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:14.185 [2024-12-09 05:25:05.469577] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:30:14.185 [2024-12-09 05:25:05.469585] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 
-t tcp -a 10.0.0.4 -s 4421 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:14.185 [2024-12-09 05:25:05.477226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 01:30:14.185 [2024-12-09 05:25:05.477538] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:14.185 [2024-12-09 05:25:05.477569] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:14.185 05:25:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 01:30:14.185 [2024-12-09 05:25:05.610636] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns0_nvme0 01:30:14.185 [2024-12-09 05:25:05.610886] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns1_nvme0 01:30:14.185 [2024-12-09 05:25:05.675925] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:30:14.185 [2024-12-09 05:25:05.675958] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 01:30:14.185 [2024-12-09 05:25:05.675964] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:30:14.185 [2024-12-09 05:25:05.675967] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:30:14.185 [2024-12-09 05:25:05.675976] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:14.185 [2024-12-09 05:25:05.676059] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 01:30:14.185 [2024-12-09 05:25:05.676075] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns1_nvme0 done 01:30:14.185 [2024-12-09 05:25:05.676078] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:30:14.185 [2024-12-09 05:25:05.676081] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:30:14.185 [2024-12-09 05:25:05.676088] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:14.185 [2024-12-09 05:25:05.721716] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:30:14.185 [2024-12-09 05:25:05.721806] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:30:14.185 [2024-12-09 05:25:05.721836] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:30:14.185 [2024-12-09 05:25:05.721839] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 
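At 05:25:05 a second listener on port 4421 is added to each subsystem; both discovery controllers raise an AER, re-fetch the discovery log page, and attach the new path, which is why the checks that follow expect get_subsystem_paths to report "4420 4421" for both mdns0_nvme0 and mdns1_nvme0. Condensed from the rpc_cmd lines above (the NQNs and addresses are the ones used in this run):

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
sleep 1    # give the discovery pollers time to process the AERs and attach the 4421 path

[[ $(get_subsystem_paths mdns0_nvme0) == "4420 4421" ]]
[[ $(get_subsystem_paths mdns1_nvme0) == "4420 4421" ]]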
01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 [2024-12-09 05:25:06.690544] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:15.118 [2024-12-09 05:25:06.690568] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:15.118 [2024-12-09 05:25:06.690587] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:30:15.118 [2024-12-09 05:25:06.690594] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:15.118 [2024-12-09 05:25:06.691441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.691458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.691464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.691469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:30:15.118 [2024-12-09 05:25:06.691474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.691479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.691484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.691488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.691493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:15.118 [2024-12-09 05:25:06.698549] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:30:15.118 [2024-12-09 05:25:06.698580] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:30:15.118 [2024-12-09 05:25:06.701413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.118 05:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 01:30:15.118 [2024-12-09 05:25:06.703493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.703512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.703518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.703522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.703528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.703532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.703537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:30:15.118 [2024-12-09 05:25:06.703541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:30:15.118 [2024-12-09 05:25:06.703546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 
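The @195/@196 calls above remove the original 4420 listeners from both subsystems, followed by a 1 s sleep at @197. The ABORTED - SQ DELETION completions and the long run of connect() errno = 111 (ECONNREFUSED) / "Resetting controller failed" messages that follow are the host-side bdev_nvme module retrying the 4420 path that has just been torn down, rather than a test failure. Condensed from the trace (same NQNs and addresses as above):

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
sleep 1    # reconnect attempts against the removed 4420 listeners fail with ECONNREFUSED below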
01:30:15.118 [2024-12-09 05:25:06.711427] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.118 [2024-12-09 05:25:06.711438] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.118 [2024-12-09 05:25:06.711441] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.118 [2024-12-09 05:25:06.711444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.118 [2024-12-09 05:25:06.711463] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.711503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.119 [2024-12-09 05:25:06.711512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.119 [2024-12-09 05:25:06.711517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.119 [2024-12-09 05:25:06.711525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.119 [2024-12-09 05:25:06.711532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.119 [2024-12-09 05:25:06.711536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.119 [2024-12-09 05:25:06.711542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.119 [2024-12-09 05:25:06.711547] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.119 [2024-12-09 05:25:06.711550] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.119 [2024-12-09 05:25:06.711552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.119 [2024-12-09 05:25:06.713473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.119 [2024-12-09 05:25:06.721469] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.119 [2024-12-09 05:25:06.721481] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.119 [2024-12-09 05:25:06.721483] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.721486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.119 [2024-12-09 05:25:06.721502] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:30:15.119 [2024-12-09 05:25:06.721529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.119 [2024-12-09 05:25:06.721537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.119 [2024-12-09 05:25:06.721541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.119 [2024-12-09 05:25:06.721549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.119 [2024-12-09 05:25:06.721555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.119 [2024-12-09 05:25:06.721559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.119 [2024-12-09 05:25:06.721575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.119 [2024-12-09 05:25:06.721579] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.119 [2024-12-09 05:25:06.721582] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.119 [2024-12-09 05:25:06.721584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.119 [2024-12-09 05:25:06.723478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.119 [2024-12-09 05:25:06.723487] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.119 [2024-12-09 05:25:06.723490] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.723492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.119 [2024-12-09 05:25:06.723506] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.723528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.119 [2024-12-09 05:25:06.723535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.119 [2024-12-09 05:25:06.723539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.119 [2024-12-09 05:25:06.723546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.119 [2024-12-09 05:25:06.723552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.119 [2024-12-09 05:25:06.723556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.119 [2024-12-09 05:25:06.723568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.119 [2024-12-09 05:25:06.723572] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:30:15.119 [2024-12-09 05:25:06.723575] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.119 [2024-12-09 05:25:06.723577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.119 [2024-12-09 05:25:06.731507] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.119 [2024-12-09 05:25:06.731517] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.119 [2024-12-09 05:25:06.731520] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.731522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.119 [2024-12-09 05:25:06.731537] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.119 [2024-12-09 05:25:06.731571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.119 [2024-12-09 05:25:06.731578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.119 [2024-12-09 05:25:06.731583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.119 [2024-12-09 05:25:06.731590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.119 [2024-12-09 05:25:06.731596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.119 [2024-12-09 05:25:06.731600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.119 [2024-12-09 05:25:06.731604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.119 [2024-12-09 05:25:06.731608] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.119 [2024-12-09 05:25:06.731610] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.119 [2024-12-09 05:25:06.731612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.381 [2024-12-09 05:25:06.733511] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.381 [2024-12-09 05:25:06.733520] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.381 [2024-12-09 05:25:06.733523] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.733525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.381 [2024-12-09 05:25:06.733540] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:30:15.381 [2024-12-09 05:25:06.733572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.381 [2024-12-09 05:25:06.733580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.381 [2024-12-09 05:25:06.733584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.381 [2024-12-09 05:25:06.733591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.381 [2024-12-09 05:25:06.733602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.381 [2024-12-09 05:25:06.733606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.381 [2024-12-09 05:25:06.733610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.381 [2024-12-09 05:25:06.733614] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.381 [2024-12-09 05:25:06.733616] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.381 [2024-12-09 05:25:06.733618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.381 [2024-12-09 05:25:06.741543] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.381 [2024-12-09 05:25:06.741553] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.381 [2024-12-09 05:25:06.741555] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.741557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.381 [2024-12-09 05:25:06.741585] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.741609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.381 [2024-12-09 05:25:06.741616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.381 [2024-12-09 05:25:06.741620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.381 [2024-12-09 05:25:06.741627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.381 [2024-12-09 05:25:06.741634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.381 [2024-12-09 05:25:06.741637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.381 [2024-12-09 05:25:06.741641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.381 [2024-12-09 05:25:06.741645] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:30:15.381 [2024-12-09 05:25:06.741648] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.381 [2024-12-09 05:25:06.741650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.381 [2024-12-09 05:25:06.743546] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.381 [2024-12-09 05:25:06.743556] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.381 [2024-12-09 05:25:06.743559] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.743571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.381 [2024-12-09 05:25:06.743585] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.743607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.381 [2024-12-09 05:25:06.743614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.381 [2024-12-09 05:25:06.743620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.381 [2024-12-09 05:25:06.743627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.381 [2024-12-09 05:25:06.743634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.381 [2024-12-09 05:25:06.743639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.381 [2024-12-09 05:25:06.743644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.381 [2024-12-09 05:25:06.743649] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.381 [2024-12-09 05:25:06.743652] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.381 [2024-12-09 05:25:06.743655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.381 [2024-12-09 05:25:06.751593] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.381 [2024-12-09 05:25:06.751671] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.381 [2024-12-09 05:25:06.751675] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.751678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.381 [2024-12-09 05:25:06.751697] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:30:15.381 [2024-12-09 05:25:06.751736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.381 [2024-12-09 05:25:06.751744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.381 [2024-12-09 05:25:06.751749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.381 [2024-12-09 05:25:06.751756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.381 [2024-12-09 05:25:06.751763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.381 [2024-12-09 05:25:06.751767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.381 [2024-12-09 05:25:06.751772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.381 [2024-12-09 05:25:06.751775] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.381 [2024-12-09 05:25:06.751778] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.381 [2024-12-09 05:25:06.751780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.381 [2024-12-09 05:25:06.753591] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.381 [2024-12-09 05:25:06.753601] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.381 [2024-12-09 05:25:06.753603] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.381 [2024-12-09 05:25:06.753606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.381 [2024-12-09 05:25:06.753621] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.753643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.753650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.382 [2024-12-09 05:25:06.753654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.753661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.753667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.753671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.753675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.753679] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:30:15.382 [2024-12-09 05:25:06.753681] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.753684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.761709] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.761719] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.382 [2024-12-09 05:25:06.761721] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.761724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.382 [2024-12-09 05:25:06.761739] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.761761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.761767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.382 [2024-12-09 05:25:06.761772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.761779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.761785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.761789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.761793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.761796] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.382 [2024-12-09 05:25:06.761799] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.761801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.763628] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.763637] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.382 [2024-12-09 05:25:06.763639] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.763641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.382 [2024-12-09 05:25:06.763655] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:30:15.382 [2024-12-09 05:25:06.763675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.763682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.382 [2024-12-09 05:25:06.763687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.763693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.763699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.763703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.763707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.763711] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.382 [2024-12-09 05:25:06.763713] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.763716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.771746] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.771755] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.382 [2024-12-09 05:25:06.771757] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.771760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.382 [2024-12-09 05:25:06.771775] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.771796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.771802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.382 [2024-12-09 05:25:06.771806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.771813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.771819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.771823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.771827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.771831] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:30:15.382 [2024-12-09 05:25:06.771833] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.771835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.773661] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.773671] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.382 [2024-12-09 05:25:06.773673] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.773676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.382 [2024-12-09 05:25:06.773689] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.773710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.773716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.382 [2024-12-09 05:25:06.773721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.773727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.773743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.773747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.773751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.773755] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.382 [2024-12-09 05:25:06.773757] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.773759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.781781] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.781790] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.382 [2024-12-09 05:25:06.781792] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.382 [2024-12-09 05:25:06.781795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.382 [2024-12-09 05:25:06.781810] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:30:15.382 [2024-12-09 05:25:06.781830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.382 [2024-12-09 05:25:06.781837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.382 [2024-12-09 05:25:06.781841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.382 [2024-12-09 05:25:06.781848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.382 [2024-12-09 05:25:06.781854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.382 [2024-12-09 05:25:06.781858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.382 [2024-12-09 05:25:06.781862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.382 [2024-12-09 05:25:06.781865] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.382 [2024-12-09 05:25:06.781868] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.382 [2024-12-09 05:25:06.781870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.382 [2024-12-09 05:25:06.783696] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.382 [2024-12-09 05:25:06.783705] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.783707] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.783709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.783723] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.783744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.783751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.383 [2024-12-09 05:25:06.783755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.783762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.783776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.783779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.783784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.783787] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:30:15.383 [2024-12-09 05:25:06.783790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.383 [2024-12-09 05:25:06.783792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.383 [2024-12-09 05:25:06.791817] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.383 [2024-12-09 05:25:06.791829] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.791832] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.791834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.791850] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.791875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.791883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.383 [2024-12-09 05:25:06.791887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.791894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.791901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.791904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.791909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.791912] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.383 [2024-12-09 05:25:06.791915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.383 [2024-12-09 05:25:06.791917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.383 [2024-12-09 05:25:06.793730] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.383 [2024-12-09 05:25:06.793740] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.793742] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.793745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.793758] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:30:15.383 [2024-12-09 05:25:06.793779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.793786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.383 [2024-12-09 05:25:06.793790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.793797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.793813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.793816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.793821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.793824] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.383 [2024-12-09 05:25:06.793827] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.383 [2024-12-09 05:25:06.793829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.383 [2024-12-09 05:25:06.801856] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.383 [2024-12-09 05:25:06.801865] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.801868] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.801870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.801886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.801907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.801914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.383 [2024-12-09 05:25:06.801918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.801925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.801931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.801934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.801939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.801942] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:30:15.383 [2024-12-09 05:25:06.801945] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.383 [2024-12-09 05:25:06.801947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.383 [2024-12-09 05:25:06.803765] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.383 [2024-12-09 05:25:06.803773] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.803776] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.803778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.803793] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.803813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.803819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.383 [2024-12-09 05:25:06.803823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.803830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.803844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.803847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.803852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.803855] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.383 [2024-12-09 05:25:06.803858] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.383 [2024-12-09 05:25:06.803860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.383 [2024-12-09 05:25:06.811891] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.383 [2024-12-09 05:25:06.811901] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.383 [2024-12-09 05:25:06.811903] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.383 [2024-12-09 05:25:06.811905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.383 [2024-12-09 05:25:06.811920] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:30:15.383 [2024-12-09 05:25:06.811941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.383 [2024-12-09 05:25:06.811948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.383 [2024-12-09 05:25:06.811952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.383 [2024-12-09 05:25:06.811959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.383 [2024-12-09 05:25:06.811965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.383 [2024-12-09 05:25:06.811969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.383 [2024-12-09 05:25:06.811973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.383 [2024-12-09 05:25:06.811977] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.383 [2024-12-09 05:25:06.811979] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.384 [2024-12-09 05:25:06.811981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.384 [2024-12-09 05:25:06.813798] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.384 [2024-12-09 05:25:06.813807] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.384 [2024-12-09 05:25:06.813809] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.384 [2024-12-09 05:25:06.813812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.384 [2024-12-09 05:25:06.813825] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:30:15.384 [2024-12-09 05:25:06.813845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.384 [2024-12-09 05:25:06.813852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.384 [2024-12-09 05:25:06.813856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.384 [2024-12-09 05:25:06.813863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.384 [2024-12-09 05:25:06.813877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.384 [2024-12-09 05:25:06.813881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.384 [2024-12-09 05:25:06.813885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.384 [2024-12-09 05:25:06.813888] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:30:15.384 [2024-12-09 05:25:06.813891] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.384 [2024-12-09 05:25:06.813893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.384 [2024-12-09 05:25:06.821927] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:30:15.384 [2024-12-09 05:25:06.821936] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:30:15.384 [2024-12-09 05:25:06.821939] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:30:15.384 [2024-12-09 05:25:06.821941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:30:15.384 [2024-12-09 05:25:06.821956] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:30:15.384 [2024-12-09 05:25:06.821977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.384 [2024-12-09 05:25:06.821984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x527160 with addr=10.0.0.3, port=4420 01:30:15.384 [2024-12-09 05:25:06.821988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527160 is same with the state(6) to be set 01:30:15.384 [2024-12-09 05:25:06.821995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x527160 (9): Bad file descriptor 01:30:15.384 [2024-12-09 05:25:06.822001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:30:15.384 [2024-12-09 05:25:06.822005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:30:15.384 [2024-12-09 05:25:06.822009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:30:15.384 [2024-12-09 05:25:06.822012] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:30:15.384 [2024-12-09 05:25:06.822015] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:30:15.384 [2024-12-09 05:25:06.822017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:30:15.384 [2024-12-09 05:25:06.823830] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:30:15.384 [2024-12-09 05:25:06.823839] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:30:15.384 [2024-12-09 05:25:06.823842] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:30:15.384 [2024-12-09 05:25:06.823844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:30:15.384 [2024-12-09 05:25:06.823858] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
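The errno = 111 in the posix_sock_create connect() failures above is ECONNREFUSED: the reconnect poller keeps dialing the old listeners (10.0.0.3:4420 and 10.0.0.4:4420) after the subsystems have moved to port 4421, so every attempt is refused until discovery re-points the controllers. A minimal sketch for checking this from the shell, assuming an SPDK checkout with scripts/rpc.py and the host app still serving /tmp/host.sock as in this trace (the polling loop is illustrative, not part of the test):

    # Decode the errno reported by posix_sock_create (on Linux, 111 == ECONNREFUSED).
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'

    # Watch the attached NVMe controllers while the reconnect poller retries;
    # bdev_nvme_get_controllers is the same RPC the harness polls below.
    while sleep 1; do
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    done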
01:30:15.384 [2024-12-09 05:25:06.823878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:30:15.384 [2024-12-09 05:25:06.823884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x533ce0 with addr=10.0.0.4, port=4420 01:30:15.384 [2024-12-09 05:25:06.823888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533ce0 is same with the state(6) to be set 01:30:15.384 [2024-12-09 05:25:06.823895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x533ce0 (9): Bad file descriptor 01:30:15.384 [2024-12-09 05:25:06.823909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:30:15.384 [2024-12-09 05:25:06.823913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:30:15.384 [2024-12-09 05:25:06.823917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:30:15.384 [2024-12-09 05:25:06.823920] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:30:15.384 [2024-12-09 05:25:06.823923] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:30:15.384 [2024-12-09 05:25:06.823925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:30:15.384 [2024-12-09 05:25:06.829912] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:30:15.384 [2024-12-09 05:25:06.829926] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:30:15.384 [2024-12-09 05:25:06.829936] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:15.384 [2024-12-09 05:25:06.829952] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 01:30:15.384 [2024-12-09 05:25:06.829960] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:30:15.384 [2024-12-09 05:25:06.829967] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:15.384 [2024-12-09 05:25:06.915974] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:30:15.384 [2024-12-09 05:25:06.916002] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:16.331 05:25:07 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:16.331 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
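The xtrace records above and below show the harness helpers querying the host app over /tmp/host.sock. Pieced together from that output (the real definitions live in the harness's host/mdns_discovery.sh; rpc_cmd is assumed to be its wrapper around scripts/rpc.py, and the controller-name argument is reconstructed here as "$1"), they amount to:

    get_subsystem_names() {  # controller names, e.g. "mdns0_nvme0 mdns1_nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {        # namespace bdevs, e.g. "mdns0_nvme0n1 ... mdns1_nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {  # trsvcid of each path of one controller ("4421" after failover)
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }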
01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:16.332 05:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 01:30:16.332 [2024-12-09 05:25:07.884877] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:30:17.281 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:30:17.539 05:25:08 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.539 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.540 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 01:30:17.540 05:25:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:17.540 [2024-12-09 05:25:09.029872] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 01:30:17.540 2024/12/09 05:25:09 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:30:17.540 request: 01:30:17.540 { 01:30:17.540 "method": "bdev_nvme_start_mdns_discovery", 01:30:17.540 "params": { 01:30:17.540 "name": "mdns", 01:30:17.540 "svcname": "_nvme-disc._http", 01:30:17.540 "hostnqn": "nqn.2021-12.io.spdk:test" 01:30:17.540 } 01:30:17.540 } 01:30:17.540 Got JSON-RPC error response 01:30:17.540 GoRPCClient: error on JSON-RPC call 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:30:17.540 05:25:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 01:30:18.105 [2024-12-09 05:25:09.618389] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:30:18.105 [2024-12-09 05:25:09.718386] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:30:18.364 [2024-12-09 05:25:09.818392] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:30:18.364 [2024-12-09 05:25:09.818404] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:30:18.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:18.364 cookie is 0 01:30:18.364 is_local: 1 01:30:18.364 our_own: 0 01:30:18.364 wide_area: 0 01:30:18.364 multicast: 1 01:30:18.364 cached: 1 01:30:18.364 [2024-12-09 05:25:09.918395] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:30:18.364 [2024-12-09 05:25:09.918413] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:30:18.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:18.364 cookie is 0 01:30:18.364 is_local: 1 01:30:18.364 our_own: 0 01:30:18.364 wide_area: 0 01:30:18.364 multicast: 1 01:30:18.364 cached: 1 01:30:18.364 [2024-12-09 05:25:09.918418] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 01:30:18.622 [2024-12-09 05:25:10.018399] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:30:18.622 [2024-12-09 05:25:10.018414] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:30:18.622 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:18.622 cookie is 0 01:30:18.622 is_local: 1 01:30:18.622 our_own: 0 01:30:18.622 wide_area: 0 01:30:18.622 multicast: 1 01:30:18.622 cached: 1 01:30:18.622 [2024-12-09 05:25:10.118404] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:30:18.622 [2024-12-09 05:25:10.118431] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:30:18.622 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:18.622 cookie is 0 01:30:18.622 is_local: 1 01:30:18.622 our_own: 0 01:30:18.622 wide_area: 0 01:30:18.622 multicast: 1 01:30:18.622 cached: 1 01:30:18.622 [2024-12-09 05:25:10.118437] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:30:19.552 [2024-12-09 05:25:10.822461] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 01:30:19.552 [2024-12-09 05:25:10.822483] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 01:30:19.552 [2024-12-09 05:25:10.822492] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:30:19.552 [2024-12-09 05:25:10.908545] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 01:30:19.552 [2024-12-09 05:25:10.966762] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 01:30:19.552 [2024-12-09 05:25:10.967124] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x546ca0:1 started. 01:30:19.552 [2024-12-09 05:25:10.968197] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 01:30:19.552 [2024-12-09 05:25:10.968217] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:30:19.552 [2024-12-09 05:25:10.971349] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x546ca0 was disconnected and freed. delete nvme_qpair. 01:30:19.552 [2024-12-09 05:25:11.022330] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:30:19.552 [2024-12-09 05:25:11.022345] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:30:19.552 [2024-12-09 05:25:11.022354] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:30:19.552 [2024-12-09 05:25:11.108406] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 01:30:19.552 [2024-12-09 05:25:11.166610] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:30:19.552 [2024-12-09 05:25:11.166923] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x52e5b0:1 started. 01:30:19.809 [2024-12-09 05:25:11.167918] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 01:30:19.809 [2024-12-09 05:25:11.167936] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:30:19.809 [2024-12-09 05:25:11.171328] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x52e5b0 was disconnected and freed. delete nvme_qpair. 
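Around this point the test stops and restarts mDNS discovery and then checks that duplicate starts are rejected. A rough sketch of the RPC sequence being exercised, with arguments copied from the xtrace (rpc_cmd is the harness RPC wrapper); the two duplicated starts are expected to fail with JSON-RPC code -17 (File exists), as the request/response dumps in this log show:

    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # Same name, different service: rejected ("mDNS discovery already running with name mdns")
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
    # Different name, same service: rejected ("mDNS discovery already running for service _nvme-disc._tcp")
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test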
01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:30:23.086 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- common/autotest_common.sh@652 -- # local es=0 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.087 [2024-12-09 05:25:14.152100] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 01:30:23.087 2024/12/09 05:25:14 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:30:23.087 request: 01:30:23.087 { 01:30:23.087 "method": "bdev_nvme_start_mdns_discovery", 01:30:23.087 "params": { 01:30:23.087 "name": "cdc", 01:30:23.087 "svcname": "_nvme-disc._tcp", 01:30:23.087 "hostnqn": "nqn.2021-12.io.spdk:test" 01:30:23.087 } 01:30:23.087 } 01:30:23.087 Got JSON-RPC error response 01:30:23.087 GoRPCClient: error on JSON-RPC call 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ 
mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:30:23.087 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:23.087 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:30:23.087 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:23.087 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:23.087 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:23.087 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:23.087 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:30:23.087 05:25:14 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:23.087 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:23.088 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:23.088 05:25:14 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 01:30:23.088 [2024-12-09 05:25:14.318434] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 01:30:24.018 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:24.019 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:30:24.019 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:30:24.019 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94246 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94246 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94276 01:30:24.019 Got SIGTERM, quitting. 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:30:24.019 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 01:30:24.019 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 01:30:24.019 avahi-daemon 0.8 exiting. 
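The sequence above covers both the negative and the positive mDNS checks: a second bdev_nvme_start_mdns_discovery for the already-running _nvme-disc._tcp service is rejected with Code=-17 (File exists); the spdk1 record is present in avahi-browse output while the 10.0.0.3:8009 listener exists; and after nvmf_subsystem_remove_listener runs and the avahi poller stops, the same check passes with 'not found'. The check_mdns_request_exists helper driving those checks can be paraphrased roughly as below; this is a sketch reconstructed from the xtrace output, not the verbatim script:

  check_mdns_request_exists() {
      # Browse the local _nvme-disc._tcp services in parseable form and look
      # for a record mentioning the given process name, IP and port.
      local process=$1 ip=$2 port=$3 check_type=$4 line
      local lines=()
      readarray -t lines < <(avahi-browse -t -r _nvme-disc._tcp -p)
      for line in "${lines[@]}"; do
          if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
              # A matching record exists: succeed only if we expected to find one.
              [[ $check_type == found ]]
              return
          fi
      done
      # No matching record: succeed only if we expected it to be absent.
      [[ $check_type == "not found" ]]
  }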
01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:30:24.019 rmmod nvme_tcp 01:30:24.019 rmmod nvme_fabrics 01:30:24.019 rmmod nvme_keyring 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94196 ']' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94196 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94196 ']' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94196 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94196 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:30:24.019 killing process with pid 94196 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94196' 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94196 01:30:24.019 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94196 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 01:30:24.277 01:30:24.277 real 0m21.725s 01:30:24.277 user 0m41.747s 01:30:24.277 sys 0m1.658s 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:24.277 05:25:15 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:30:24.277 ************************************ 01:30:24.277 END TEST nvmf_mdns_discovery 01:30:24.277 ************************************ 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:30:24.536 ************************************ 01:30:24.536 START TEST nvmf_host_multipath 01:30:24.536 ************************************ 01:30:24.536 05:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:30:24.536 * Looking for test storage... 
01:30:24.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:24.536 --rc genhtml_branch_coverage=1 01:30:24.536 --rc genhtml_function_coverage=1 01:30:24.536 --rc genhtml_legend=1 01:30:24.536 --rc geninfo_all_blocks=1 01:30:24.536 --rc geninfo_unexecuted_blocks=1 01:30:24.536 01:30:24.536 ' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:24.536 --rc genhtml_branch_coverage=1 01:30:24.536 --rc genhtml_function_coverage=1 01:30:24.536 --rc genhtml_legend=1 01:30:24.536 --rc geninfo_all_blocks=1 01:30:24.536 --rc geninfo_unexecuted_blocks=1 01:30:24.536 01:30:24.536 ' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:24.536 --rc genhtml_branch_coverage=1 01:30:24.536 --rc genhtml_function_coverage=1 01:30:24.536 --rc genhtml_legend=1 01:30:24.536 --rc geninfo_all_blocks=1 01:30:24.536 --rc geninfo_unexecuted_blocks=1 01:30:24.536 01:30:24.536 ' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:24.536 --rc genhtml_branch_coverage=1 01:30:24.536 --rc genhtml_function_coverage=1 01:30:24.536 --rc genhtml_legend=1 01:30:24.536 --rc geninfo_all_blocks=1 01:30:24.536 --rc geninfo_unexecuted_blocks=1 01:30:24.536 01:30:24.536 ' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:24.536 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:30:24.537 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:30:24.537 Cannot find device "nvmf_init_br" 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:30:24.537 Cannot find device "nvmf_init_br2" 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:30:24.537 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:30:24.796 Cannot find device "nvmf_tgt_br" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:30:24.796 Cannot find device "nvmf_tgt_br2" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:30:24.796 Cannot find device "nvmf_init_br" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:30:24.796 Cannot find device "nvmf_init_br2" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:30:24.796 Cannot find device "nvmf_tgt_br" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:30:24.796 Cannot find device "nvmf_tgt_br2" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:30:24.796 Cannot find device "nvmf_br" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:30:24.796 Cannot find device "nvmf_init_if" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:30:24.796 Cannot find device "nvmf_init_if2" 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
01:30:24.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:30:24.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:30:24.796 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:30:25.055 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:30:25.055 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 01:30:25.055 01:30:25.055 --- 10.0.0.3 ping statistics --- 01:30:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:25.055 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:30:25.055 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:30:25.055 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 01:30:25.055 01:30:25.055 --- 10.0.0.4 ping statistics --- 01:30:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:25.055 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:30:25.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:30:25.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 01:30:25.055 01:30:25.055 --- 10.0.0.1 ping statistics --- 01:30:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:25.055 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:30:25.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:30:25.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:30:25.055 01:30:25.055 --- 10.0.0.2 ping statistics --- 01:30:25.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:25.055 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=94914 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 94914 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 94914 ']' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:25.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:25.055 05:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:30:25.055 [2024-12-09 05:25:16.513246] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:30:25.055 [2024-12-09 05:25:16.513301] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:30:25.055 [2024-12-09 05:25:16.647460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:30:25.313 [2024-12-09 05:25:16.683288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:30:25.313 [2024-12-09 05:25:16.683327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:30:25.313 [2024-12-09 05:25:16.683333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:30:25.313 [2024-12-09 05:25:16.683337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:30:25.313 [2024-12-09 05:25:16.683342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:30:25.313 [2024-12-09 05:25:16.684237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:30:25.313 [2024-12-09 05:25:16.684302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94914 01:30:25.879 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:30:26.137 [2024-12-09 05:25:17.594204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:30:26.137 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:30:26.396 Malloc0 01:30:26.396 05:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:30:26.653 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:30:26.653 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:30:26.911 [2024-12-09 05:25:18.419595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:30:26.911 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 01:30:27.169 [2024-12-09 05:25:18.583649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95012 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95012 /var/tmp/bdevperf.sock 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95012 ']' 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:27.169 05:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:30:28.102 05:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:28.102 05:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:30:28.102 05:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:30:28.102 05:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:30:28.360 Nvme0n1 01:30:28.617 05:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:30:28.875 Nvme0n1 01:30:28.875 05:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:30:28.875 05:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:30:29.809 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:30:29.809 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:30:30.066 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
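With both listeners up and the ANA states set (4420 non_optimized, 4421 optimized), the steps below start bpftrace probes against the target, let bdevperf generate I/O, and then verify that the port counted in trace.txt is the one whose listener advertises the expected ANA state. A condensed sketch of that confirm_io_on_port check, using only commands that appear in the trace, might look like this:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  expected_state=optimized
  # Which listener port does the target report with the expected ANA state?
  active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
  # Which port did the bpftrace @path probes actually count I/O on?
  observed_port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace_file" | sed -n 1p | cut -d ']' -f1)
  [[ $active_port == "$observed_port" ]] && echo "I/O confirmed on port $active_port"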
01:30:30.324 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:30:30.324 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95099 01:30:30.324 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:30:30.324 05:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:30:36.920 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:30:36.920 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:30:36.920 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:36.921 Attaching 4 probes... 01:30:36.921 @path[10.0.0.3, 4421]: 26162 01:30:36.921 @path[10.0.0.3, 4421]: 26730 01:30:36.921 @path[10.0.0.3, 4421]: 26432 01:30:36.921 @path[10.0.0.3, 4421]: 26628 01:30:36.921 @path[10.0.0.3, 4421]: 26379 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95099 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:30:36.921 05:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:30:36.921 05:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:30:36.921 05:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:30:36.921 05:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95242 01:30:36.921 05:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:30:36.921 05:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:43.476 Attaching 4 probes... 01:30:43.476 @path[10.0.0.3, 4420]: 24819 01:30:43.476 @path[10.0.0.3, 4420]: 25172 01:30:43.476 @path[10.0.0.3, 4420]: 24966 01:30:43.476 @path[10.0.0.3, 4420]: 25255 01:30:43.476 @path[10.0.0.3, 4420]: 25383 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95242 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95372 01:30:43.476 05:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:30:50.048 05:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:30:50.048 05:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:50.048 Attaching 4 probes... 
01:30:50.048 @path[10.0.0.3, 4421]: 14749 01:30:50.048 @path[10.0.0.3, 4421]: 25932 01:30:50.048 @path[10.0.0.3, 4421]: 26139 01:30:50.048 @path[10.0.0.3, 4421]: 26223 01:30:50.048 @path[10.0.0.3, 4421]: 25782 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95372 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95508 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:30:50.048 05:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:56.606 Attaching 4 probes... 
01:30:56.606 01:30:56.606 01:30:56.606 01:30:56.606 01:30:56.606 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95508 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:30:56.606 05:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:30:56.606 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:30:56.606 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:30:56.607 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:30:56.607 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95643 01:30:56.607 05:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:03.188 Attaching 4 probes... 
01:31:03.188 @path[10.0.0.3, 4421]: 25586 01:31:03.188 @path[10.0.0.3, 4421]: 25991 01:31:03.188 @path[10.0.0.3, 4421]: 25511 01:31:03.188 @path[10.0.0.3, 4421]: 26207 01:31:03.188 @path[10.0.0.3, 4421]: 26251 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95643 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:31:03.188 [2024-12-09 05:25:54.626610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 [2024-12-09 05:25:54.626694] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5eb0 is same with the state(6) to be set 01:31:03.188 05:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:31:04.121 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:31:04.121 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95778 01:31:04.121 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:31:04.121 05:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:10.677 Attaching 4 probes... 01:31:10.677 @path[10.0.0.3, 4420]: 24368 01:31:10.677 @path[10.0.0.3, 4420]: 24836 01:31:10.677 @path[10.0.0.3, 4420]: 24278 01:31:10.677 @path[10.0.0.3, 4420]: 24536 01:31:10.677 @path[10.0.0.3, 4420]: 24453 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95778 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:10.677 05:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:31:10.677 [2024-12-09 05:26:02.068296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:31:10.678 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:31:10.678 05:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:31:17.231 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:31:17.231 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95979 01:31:17.231 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 
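Each confirm_io_on_port pass works the same way: bpftrace.sh attaches scripts/bpf/nvmf_path.bt to the nvmf target (pid 94914) and the per-path I/O counters end up in trace.txt as lines like "@path[10.0.0.3, 4421]: 26162"; after the 6-second window the test asks the target which listener currently advertises the expected ANA state and checks that the bpftrace counters landed on that same port. A sketch of that check, assembled from the jq/awk/cut/sed expressions visible in the xtrace (variable names here are illustrative, and jq is parameterized with --arg instead of the inline string the script interpolates):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  expected_state=$1    # e.g. optimized, non_optimized, or "" when all paths are inaccessible
  expected_port=$2     # e.g. 4421, 4420, or ""

  # Port of the listener whose first ANA state matches what the test expects.
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
      jq -r --arg s "$expected_state" '.[] | select(.ana_states[0].ana_state==$s) | .address.trsvcid')

  # Port that actually carried I/O according to the bpftrace counters (first @path line).
  port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

  [[ $active_port == "$expected_port" ]] && [[ $port == "$expected_port" ]]

When both listeners were set inaccessible earlier in the run, trace.txt contained no @path lines, so both variables were empty and the check passed against an empty expected port.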
01:31:17.231 05:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94914 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:23.820 Attaching 4 probes... 01:31:23.820 @path[10.0.0.3, 4421]: 24864 01:31:23.820 @path[10.0.0.3, 4421]: 25789 01:31:23.820 @path[10.0.0.3, 4421]: 25473 01:31:23.820 @path[10.0.0.3, 4421]: 25498 01:31:23.820 @path[10.0.0.3, 4421]: 25500 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95979 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95012 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95012 ']' 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95012 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95012 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:31:23.820 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95012' 01:31:23.820 killing process with pid 95012 01:31:23.821 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95012 01:31:23.821 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95012 01:31:23.821 { 01:31:23.821 "results": [ 01:31:23.821 { 01:31:23.821 "job": "Nvme0n1", 01:31:23.821 "core_mask": "0x4", 01:31:23.821 "workload": "verify", 01:31:23.821 "status": "terminated", 01:31:23.821 "verify_range": { 01:31:23.821 
"start": 0, 01:31:23.821 "length": 16384 01:31:23.821 }, 01:31:23.821 "queue_depth": 128, 01:31:23.821 "io_size": 4096, 01:31:23.821 "runtime": 54.131446, 01:31:23.821 "iops": 10834.275515196841, 01:31:23.821 "mibps": 42.32138873123766, 01:31:23.821 "io_failed": 0, 01:31:23.821 "io_timeout": 0, 01:31:23.821 "avg_latency_us": 11791.82226750042, 01:31:23.821 "min_latency_us": 237.8830769230769, 01:31:23.821 "max_latency_us": 7020619.618461538 01:31:23.821 } 01:31:23.821 ], 01:31:23.821 "core_count": 1 01:31:23.821 } 01:31:23.821 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95012 01:31:23.821 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:31:23.821 [2024-12-09 05:25:18.635868] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:23.821 [2024-12-09 05:25:18.635933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95012 ] 01:31:23.821 [2024-12-09 05:25:18.768996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:23.821 [2024-12-09 05:25:18.803542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:31:23.821 Running I/O for 90 seconds... 01:31:23.821 11218.00 IOPS, 43.82 MiB/s [2024-12-09T05:26:15.438Z] 12355.00 IOPS, 48.26 MiB/s [2024-12-09T05:26:15.438Z] 12673.00 IOPS, 49.50 MiB/s [2024-12-09T05:26:15.438Z] 12855.50 IOPS, 50.22 MiB/s [2024-12-09T05:26:15.438Z] 12922.00 IOPS, 50.48 MiB/s [2024-12-09T05:26:15.438Z] 12978.83 IOPS, 50.70 MiB/s [2024-12-09T05:26:15.438Z] 13020.00 IOPS, 50.86 MiB/s [2024-12-09T05:26:15.438Z] [2024-12-09 05:25:28.308434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.308742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.308826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.308870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.308907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.308950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.308983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 
05:25:28.309876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.309960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.309997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.310034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.310102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.310172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39920 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.310671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.310705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.311431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.821 [2024-12-09 05:25:28.311500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.311961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.311997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312539] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.821 [2024-12-09 05:25:28.312610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.821 [2024-12-09 05:25:28.312617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.822 [2024-12-09 05:25:28.312635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.312983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.312990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.313001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.313008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.313020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.313027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.313038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.313045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.314179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.314244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.314259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.314266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.314278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.314284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.314296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.314303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.314315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.314321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.315409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.315416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.317830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 [2024-12-09 05:25:28.317849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.822 [2024-12-09 05:25:28.317864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.822 
[2024-12-09 05:25:28.317877 - 05:25:28.318798] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs, WRITE sqid:1 nsid:1 len:8 (lba 40240 through 40584, cid varies) SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd 001f through 004b)
13015.12 IOPS, 50.84 MiB/s [2024-12-09T05:26:15.440Z] 12949.44 IOPS, 50.58 MiB/s [2024-12-09T05:26:15.440Z] 12917.00 IOPS, 50.46 MiB/s [2024-12-09T05:26:15.440Z] 12882.91 IOPS, 50.32 MiB/s [2024-12-09T05:26:15.440Z] 12853.58 IOPS, 50.21 MiB/s [2024-12-09T05:26:15.440Z] 12848.08 IOPS, 50.19 MiB/s [2024-12-09T05:26:15.440Z] 12827.00 IOPS, 50.11 MiB/s [2024-12-09T05:26:15.440Z]
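The "(03/02)" printed with every failed completion above is the NVMe status code type / status code pair: SCT 3h (Path Related Status), SC 02h (Asymmetric Access Inaccessible), meaning the ANA state of the path these I/Os were issued on is Inaccessible, which is the condition this nvmf test appears to be driving while the host keeps retrying the queued WRITEs and READs. Each command covers len:8 512-byte blocks (the SGL len:0x1000, i.e. 4 KiB), which is consistent with the throughput samples above: 13015.12 IOPS x 4 KiB ~= 50.84 MiB/s. The sketch below is not part of this build's output; it only illustrates, with made-up helper names, how the sct/sc fields sit in the 16-bit status halfword of completion dword 3 and how the (03/02) pattern maps to the message printed by spdk_nvme_print_completion:

/* Illustrative only -- not from this build. Field layout per the NVMe
 * specification (completion queue entry DW3, bits 31:16); helper names
 * are hypothetical, not SPDK API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_PATH_RELATED    0x3   /* Status Code Type 3h */
#define NVME_SC_ANA_INACCESSIBLE 0x02  /* Asymmetric Access Inaccessible */

struct nvme_status_fields {
	uint8_t sc;   /* status code */
	uint8_t sct;  /* status code type */
	uint8_t crd;  /* command retry delay */
	bool m;       /* more information available */
	bool dnr;     /* do not retry */
	bool p;       /* phase tag */
};

/* Unpack the 16-bit status halfword (completion DW3 bits 31:16). */
static struct nvme_status_fields nvme_status_unpack(uint16_t status)
{
	struct nvme_status_fields f = {
		.p   = status & 0x1,
		.sc  = (status >> 1) & 0xff,
		.sct = (status >> 9) & 0x7,
		.crd = (status >> 12) & 0x3,
		.m   = (status >> 14) & 0x1,
		.dnr = (status >> 15) & 0x1,
	};
	return f;
}

int main(void)
{
	/* SCT=3h, SC=02h, p/m/dnr all 0 -- the "(03/02) ... p:0 m:0 dnr:0" pattern above. */
	uint16_t raw = (NVME_SCT_PATH_RELATED << 9) | (NVME_SC_ANA_INACCESSIBLE << 1);
	struct nvme_status_fields f = nvme_status_unpack(raw);

	if (f.sct == NVME_SCT_PATH_RELATED && f.sc == NVME_SC_ANA_INACCESSIBLE) {
		printf("ASYMMETRIC ACCESS INACCESSIBLE (%02x/%02x) p:%d m:%d dnr:%d\n",
		       f.sct, f.sc, f.p, f.m, f.dnr);
	}
	return 0;
}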
[2024-12-09 05:25:34.736460 - 05:25:34.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs, WRITE sqid:1 nsid:1 len:8 (lba 30848 through 30976) SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 len:8 (lba 29960 through 30840) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; cid varies, sqhd advances 0069 through 007f, wraps to 0000 through 0068, and the same LBAs are then reissued and fail again with the same status from sqhd 0069 onward.
[2024-12-09 05:25:34.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.740524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 
05:25:34.740584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.740984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.740991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.826 [2024-12-09 05:25:34.741011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 05:25:34.741148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.826 [2024-12-09 
05:25:34.741601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.826 [2024-12-09 05:25:34.741614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.827 [2024-12-09 05:25:34.741621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30400 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.741984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.741996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 
05:25:34.742169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.742429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.742441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.750987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.750994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.827 [2024-12-09 05:25:34.751145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.827 [2024-12-09 05:25:34.751182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.751200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.751229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:31:23.828 [2024-12-09 05:25:34.751245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.751274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.751290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.751320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.751336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.751365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.751382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.751412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.751429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.752658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.752709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.752978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.752995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.753040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
01:31:23.828 [2024-12-09 05:25:34.753818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.753954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.753971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.828 [2024-12-09 05:25:34.754160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.754957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.828 [2024-12-09 05:25:34.754986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.828 [2024-12-09 05:25:34.755003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.755048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.755094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.755140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:31:23.829 [2024-12-09 05:25:34.755186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.755242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.755287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.755544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.755575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.756690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.756739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.829 [2024-12-09 05:25:34.756785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.756831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.756886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.756932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.756961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.756978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 01:31:23.829 [2024-12-09 05:25:34.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.757961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.757990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.829 [2024-12-09 05:25:34.758628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.829 [2024-12-09 05:25:34.758657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.758976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.758993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:31:23.830 [2024-12-09 05:25:34.759039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.759639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.759656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.760754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.760814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.760860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.760906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.760951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.760979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.760996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.761041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.761086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.830 [2024-12-09 05:25:34.761177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
01:31:23.830 [2024-12-09 05:25:34.761485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.761964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.761981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.762027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.762072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.762118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.762163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.830 [2024-12-09 05:25:34.762209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.830 [2024-12-09 05:25:34.762238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.762300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:31:23.831 [2024-12-09 05:25:34.762864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.762983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.762999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.763417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.763508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.763553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.763616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.763645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.763668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.764834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.764855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.764874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.831 [2024-12-09 05:25:34.764893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.764911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.764931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.764949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.764969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.764987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.764999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
01:31:23.831 [2024-12-09 05:25:34.765061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.831 [2024-12-09 05:25:34.765312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.831 [2024-12-09 05:25:34.765319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.765981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.765993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.832 [2024-12-09 05:25:34.766700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.832 [2024-12-09 05:25:34.766851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.832 [2024-12-09 05:25:34.766863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.766987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.766999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.767159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.767627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.767646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.767665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.767684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.767696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.768148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.768169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.768188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.768207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.833 [2024-12-09 05:25:34.768226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.768245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.833 [2024-12-09 05:25:34.768272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.833 [2024-12-09 05:25:34.768289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768971] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.768978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.768990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 
05:25:34.769164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.834 [2024-12-09 05:25:34.769383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.834 [2024-12-09 05:25:34.769816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.769827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.769847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.769867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.769886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.769909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.769928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.769947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.769966] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.769985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.769997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.770004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.770023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.770041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 
05:25:34.770161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30080 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.835 [2024-12-09 05:25:34.770503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770534] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 
05:25:34.770737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.835 [2024-12-09 05:25:34.770917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 
cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.835 [2024-12-09 05:25:34.770928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.770937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.770949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.770956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.770975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.770987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.770994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.836 [2024-12-09 05:25:34.771578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 
05:25:34.771730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.771984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.771997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 01:31:23.836 [2024-12-09 05:25:34.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.836 [2024-12-09 05:25:34.772373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.836 [2024-12-09 05:25:34.772385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.772698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.772708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:110 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.837 [2024-12-09 05:25:34.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 01:31:23.837 [2024-12-09 05:25:34.773901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.837 [2024-12-09 05:25:34.773947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.837 [2024-12-09 05:25:34.773959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.773965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.773978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.773984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:31:23.838 [2024-12-09 05:25:34.774896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.838 [2024-12-09 05:25:34.774935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.774984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.774992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 01:31:23.838 [2024-12-09 05:25:34.775469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.775476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.838 [2024-12-09 05:25:34.775488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.838 [2024-12-09 05:25:34.779423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779812] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.779986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.779998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.839 [2024-12-09 05:25:34.780758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.839 [2024-12-09 05:25:34.780950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.839 [2024-12-09 05:25:34.780962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.780969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.780981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.780987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.780999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
01:31:23.840 [2024-12-09 05:25:34.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.781216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:31:23.840 [2024-12-09 05:25:34.781625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.781681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.781693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.781700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.840 [2024-12-09 05:25:34.782274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.840 [2024-12-09 05:25:34.782484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.840 [2024-12-09 05:25:34.782496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 01:31:23.841 [2024-12-09 05:25:34.782619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.782987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.782994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:31:23.841 [2024-12-09 05:25:34.783187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.841 [2024-12-09 05:25:34.783948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.841 [2024-12-09 05:25:34.783968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.783981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.841 [2024-12-09 05:25:34.783988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.784000] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.841 [2024-12-09 05:25:34.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.784019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.841 [2024-12-09 05:25:34.784026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.841 [2024-12-09 05:25:34.784038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.841 [2024-12-09 05:25:34.784045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.842 [2024-12-09 05:25:34.784064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.842 [2024-12-09 05:25:34.784085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 
m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.842 [2024-12-09 05:25:34.784557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.784986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.784998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.785005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.785017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.842 [2024-12-09 05:25:34.785024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.785469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.842 [2024-12-09 05:25:34.785482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.785495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.842 [2024-12-09 05:25:34.785502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.842 [2024-12-09 05:25:34.785514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.843 [2024-12-09 05:25:34.785644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f 
p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.785984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.785991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.786003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.786010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.786022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.786029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.786041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.786048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.786060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.790128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.790157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.790183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.790208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.843 [2024-12-09 05:25:34.790234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.843 [2024-12-09 05:25:34.790243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
01:31:23.843 - 01:31:23.847 [2024-12-09 05:25:34.790 - 05:25:34.797] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (repetitive log condensed) several hundred near-identical pairs of READ commands (sqid:1 nsid:1, lba 29960-30840, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1, lba 30848-30976, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
01:31:23.847 [2024-12-09 05:25:34.797737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.797981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.797990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 01:31:23.847 [2024-12-09 05:25:34.798345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:34.798537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:34.798549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.847 12264.60 IOPS, 47.91 MiB/s [2024-12-09T05:26:15.464Z] 11977.50 IOPS, 46.79 MiB/s [2024-12-09T05:26:15.464Z] 12037.00 IOPS, 47.02 MiB/s [2024-12-09T05:26:15.464Z] 12093.94 IOPS, 47.24 MiB/s [2024-12-09T05:26:15.464Z] 12146.89 IOPS, 47.45 MiB/s [2024-12-09T05:26:15.464Z] 12183.65 IOPS, 47.59 MiB/s [2024-12-09T05:26:15.464Z] 12231.10 IOPS, 47.78 MiB/s [2024-12-09T05:26:15.464Z] [2024-12-09 05:25:41.572149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.572361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.572368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.573174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.573195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 
05:25:41.573318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.847 [2024-12-09 05:25:41.573357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107728 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.847 [2024-12-09 05:25:41.573585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:31:23.847 [2024-12-09 05:25:41.573597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.573983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.573998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 
05:25:41.574392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108120 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:31:23.848 [2024-12-09 05:25:41.574840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.848 [2024-12-09 05:25:41.574847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.574984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.574990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.849 [2024-12-09 05:25:41.575423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.849 [2024-12-09 05:25:41.575445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.849 [2024-12-09 05:25:41.575468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.849 [2024-12-09 05:25:41.575491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 
[2024-12-09 05:25:41.575591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:31:23.849 [2024-12-09 05:25:41.575607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.849 [2024-12-09 05:25:41.575614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:41.575681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:41.575703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:41.575727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:41.575903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:41.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:31:23.850 11782.36 IOPS, 46.02 MiB/s [2024-12-09T05:26:15.467Z] 11270.09 IOPS, 44.02 MiB/s [2024-12-09T05:26:15.467Z] 10800.50 IOPS, 42.19 MiB/s [2024-12-09T05:26:15.467Z] 10368.48 IOPS, 40.50 MiB/s [2024-12-09T05:26:15.467Z] 9969.69 IOPS, 38.94 MiB/s [2024-12-09T05:26:15.467Z] 9600.44 IOPS, 37.50 MiB/s [2024-12-09T05:26:15.467Z] 9257.57 IOPS, 36.16 MiB/s [2024-12-09T05:26:15.467Z] 9299.72 IOPS, 36.33 MiB/s [2024-12-09T05:26:15.467Z] 9423.47 IOPS, 36.81 MiB/s [2024-12-09T05:26:15.467Z] 9530.00 IOPS, 37.23 MiB/s [2024-12-09T05:26:15.467Z] 9642.34 IOPS, 37.67 MiB/s [2024-12-09T05:26:15.467Z] 9747.21 IOPS, 38.08 MiB/s [2024-12-09T05:26:15.467Z] 9845.00 IOPS, 38.46 MiB/s [2024-12-09T05:26:15.467Z] [2024-12-09 05:25:54.627792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.627990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.627998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.850 [2024-12-09 05:25:54.628079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.850 [2024-12-09 05:25:54.628173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.850 [2024-12-09 05:25:54.628181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 
05:25:54.628188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.851 [2024-12-09 05:25:54.628799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.851 [2024-12-09 05:25:54.628805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:23.852 [2024-12-09 05:25:54.628820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
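A note for readers skimming this stretch of the log: spdk_nvme_print_completion reports each completion status as an (SCT/SC) pair, and only two values dominate the records above and below. (03/02) is Path Related Status / Asymmetric Access Inaccessible, meaning the ANA group behind the active path has been made inaccessible, and (00/08) is Generic Command Status / Command Aborted due to SQ Deletion, meaning the submission queue was torn down with commands still outstanding. The tiny helper below is illustrative only and is not part of the SPDK tree.

# Illustrative helper (not part of SPDK): decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion for the two statuses seen in this run.
decode_nvme_status() {
    case "$1" in
        03/02) echo "Path Related Status / Asymmetric Access Inaccessible (ANA state change on this path)" ;;
        00/08) echo "Generic Command Status / Command Aborted due to SQ Deletion (queue deleted with I/O outstanding)" ;;
        *)     echo "status $1 not covered here" ;;
    esac
}
decode_nvme_status 03/02
decode_nvme_status 00/08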
01:31:23.852 [2024-12-09 05:25:54.628828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:31:23.852 (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.628989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.628998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629133] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:23.852 [2024-12-09 05:25:54.629314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.852 [2024-12-09 05:25:54.629334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81856 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81864 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81872 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 
05:25:54.629453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81880 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81888 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81896 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81904 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81912 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629602] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81928 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81936 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81944 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81952 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81960 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81968 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 01:31:23.853 [2024-12-09 05:25:54.629749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81976 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81984 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81992 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82000 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82008 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82016 len:8 PRP1 0x0 PRP2 0x0 01:31:23.853 [2024-12-09 05:25:54.629879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.853 [2024-12-09 05:25:54.629886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.853 [2024-12-09 05:25:54.629890] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.853 [2024-12-09 05:25:54.629898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82024 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.629904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.629911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.629915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.629921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82032 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.629927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.629934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.629938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.629944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82040 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.629951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.629958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.629963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.629968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82048 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.629974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.629981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.638533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.638556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82056 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.638574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.638590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.638595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82064 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.638602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.638614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.638619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82072 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.638626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.638646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.638651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82080 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.638657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:23.854 [2024-12-09 05:25:54.638669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:23.854 [2024-12-09 05:25:54.638675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81440 len:8 PRP1 0x0 PRP2 0x0 01:31:23.854 [2024-12-09 05:25:54.638682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:31:23.854 [2024-12-09 05:25:54.638807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:31:23.854 [2024-12-09 05:25:54.638822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:31:23.854 [2024-12-09 05:25:54.638836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:31:23.854 [2024-12-09 05:25:54.638850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:23.854 [2024-12-09 05:25:54.638857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e1a0 is same with the state(6) to be set 01:31:23.854 [2024-12-09 05:25:54.639902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:31:23.854 [2024-12-09 05:25:54.639925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64e1a0 (9): Bad file descriptor 01:31:23.854 [2024-12-09 05:25:54.639999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:23.854 [2024-12-09 05:25:54.640012] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64e1a0 with addr=10.0.0.3, port=4421 01:31:23.854 [2024-12-09 05:25:54.640020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64e1a0 is same with the state(6) to be set 01:31:23.854 [2024-12-09 05:25:54.640032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64e1a0 (9): Bad file descriptor 01:31:23.854 [2024-12-09 05:25:54.640042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:31:23.854 [2024-12-09 05:25:54.640049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:31:23.854 [2024-12-09 05:25:54.640057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:31:23.854 [2024-12-09 05:25:54.640063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:31:23.854 [2024-12-09 05:25:54.640071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:31:23.854 9914.97 IOPS, 38.73 MiB/s [2024-12-09T05:26:15.471Z] 9977.64 IOPS, 38.98 MiB/s [2024-12-09T05:26:15.471Z] 10044.24 IOPS, 39.24 MiB/s [2024-12-09T05:26:15.471Z] 10102.79 IOPS, 39.46 MiB/s [2024-12-09T05:26:15.471Z] 10156.62 IOPS, 39.67 MiB/s [2024-12-09T05:26:15.471Z] 10207.48 IOPS, 39.87 MiB/s [2024-12-09T05:26:15.471Z] 10258.46 IOPS, 40.07 MiB/s [2024-12-09T05:26:15.471Z] 10307.24 IOPS, 40.26 MiB/s [2024-12-09T05:26:15.471Z] 10354.37 IOPS, 40.45 MiB/s [2024-12-09T05:26:15.471Z] 10398.14 IOPS, 40.62 MiB/s [2024-12-09T05:26:15.471Z] [2024-12-09 05:26:04.706937] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
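Taken together, the records above show the fault-injection step of multipath.sh behaving as intended: the target-side subsystem is removed while fio is still issuing I/O (the rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 call traced earlier), outstanding commands complete with ABORTED - SQ DELETION, the host's reconnect to 10.0.0.3 port 4421 is refused (connect() errno 111, and flushing the stale qpair reports Bad file descriptor), and bdev_nvme keeps resetting the controller until a listener is reachable again, at which point "Resetting controller successful" is logged and throughput recovers. The sketch below shows how the same behaviour could be reproduced and observed by hand; the rpc.py path and the NQN are taken from this log, while the RPC socket names and the use of bdev_nvme_get_controllers (a generic SPDK RPC, not something multipath.sh itself invokes here) are illustrative assumptions.

# Minimal manual reproduction sketch. Assumptions: the target and the bdevperf host run as
# separate SPDK processes, each with its own RPC socket; /var/tmp/host.sock is a hypothetical
# name for the host-side socket, the target uses rpc.py's default socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers      # host side: controller and its paths before the fault
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # target side: drop the subsystem while I/O is running
# queued commands now complete with "ABORTED - SQ DELETION (00/08)"; reconnects to the removed
# listener fail with errno 111 until the test restores a path, then the controller reset succeeds
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers      # host side: controller back online after the reset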
01:31:23.854 10446.67 IOPS, 40.81 MiB/s [2024-12-09T05:26:15.471Z] 10494.78 IOPS, 41.00 MiB/s [2024-12-09T05:26:15.471Z] 10545.06 IOPS, 41.19 MiB/s [2024-12-09T05:26:15.471Z] 10592.81 IOPS, 41.38 MiB/s [2024-12-09T05:26:15.471Z] 10630.92 IOPS, 41.53 MiB/s [2024-12-09T05:26:15.471Z] 10676.74 IOPS, 41.71 MiB/s [2024-12-09T05:26:15.471Z] 10718.61 IOPS, 41.87 MiB/s [2024-12-09T05:26:15.471Z] 10757.88 IOPS, 42.02 MiB/s [2024-12-09T05:26:15.471Z] 10794.11 IOPS, 42.16 MiB/s [2024-12-09T05:26:15.471Z] 10831.81 IOPS, 42.31 MiB/s [2024-12-09T05:26:15.471Z] Received shutdown signal, test time was about 54.132125 seconds 01:31:23.854 01:31:23.854 Latency(us) 01:31:23.854 [2024-12-09T05:26:15.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:23.854 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:31:23.854 Verification LBA range: start 0x0 length 0x4000 01:31:23.854 Nvme0n1 : 54.13 10834.28 42.32 0.00 0.00 11791.82 237.88 7020619.62 01:31:23.854 [2024-12-09T05:26:15.471Z] =================================================================================================================== 01:31:23.854 [2024-12-09T05:26:15.471Z] Total : 10834.28 42.32 0.00 0.00 11791.82 237.88 7020619.62 01:31:23.854 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:31:23.854 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:31:23.854 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:31:23.854 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:31:23.854 05:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:31:23.854 rmmod nvme_tcp 01:31:23.854 rmmod nvme_fabrics 01:31:23.854 rmmod nvme_keyring 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 94914 ']' 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 94914 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 94914 ']' 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 94914 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94914 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 
01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:23.854 killing process with pid 94914 01:31:23.854 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94914' 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 94914 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 94914 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:31:23.855 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 01:31:24.113 01:31:24.113 real 0m59.642s 01:31:24.113 user 2m51.406s 01:31:24.113 sys 0m9.264s 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:24.113 ************************************ 01:31:24.113 END TEST nvmf_host_multipath 01:31:24.113 ************************************ 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:31:24.113 ************************************ 01:31:24.113 START TEST nvmf_timeout 01:31:24.113 ************************************ 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:31:24.113 * Looking for test storage... 01:31:24.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 01:31:24.113 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:24.371 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:24.372 --rc genhtml_branch_coverage=1 01:31:24.372 --rc genhtml_function_coverage=1 01:31:24.372 --rc genhtml_legend=1 01:31:24.372 --rc geninfo_all_blocks=1 01:31:24.372 --rc geninfo_unexecuted_blocks=1 01:31:24.372 01:31:24.372 ' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:24.372 --rc genhtml_branch_coverage=1 01:31:24.372 --rc genhtml_function_coverage=1 01:31:24.372 --rc genhtml_legend=1 01:31:24.372 --rc geninfo_all_blocks=1 01:31:24.372 --rc geninfo_unexecuted_blocks=1 01:31:24.372 01:31:24.372 ' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:24.372 --rc genhtml_branch_coverage=1 01:31:24.372 --rc genhtml_function_coverage=1 01:31:24.372 --rc genhtml_legend=1 01:31:24.372 --rc geninfo_all_blocks=1 01:31:24.372 --rc geninfo_unexecuted_blocks=1 01:31:24.372 01:31:24.372 ' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:24.372 --rc genhtml_branch_coverage=1 01:31:24.372 --rc genhtml_function_coverage=1 01:31:24.372 --rc genhtml_legend=1 01:31:24.372 --rc geninfo_all_blocks=1 01:31:24.372 --rc geninfo_unexecuted_blocks=1 01:31:24.372 01:31:24.372 ' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:31:24.372 
05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:31:24.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:31:24.372 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 01:31:24.373 05:26:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:31:24.373 Cannot find device "nvmf_init_br" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:31:24.373 Cannot find device "nvmf_init_br2" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 01:31:24.373 Cannot find device "nvmf_tgt_br" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:31:24.373 Cannot find device "nvmf_tgt_br2" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:31:24.373 Cannot find device "nvmf_init_br" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:31:24.373 Cannot find device "nvmf_init_br2" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:31:24.373 Cannot find device "nvmf_tgt_br" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:31:24.373 Cannot find device "nvmf_tgt_br2" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:31:24.373 Cannot find device "nvmf_br" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:31:24.373 Cannot find device "nvmf_init_if" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:31:24.373 Cannot find device "nvmf_init_if2" 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:31:24.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:31:24.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:31:24.373 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
01:31:24.631 05:26:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
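Read as a whole, the nvmf_veth_init trace above builds SPDK's virtual test network: an initiator veth pair on the host and a target veth pair inside the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of those steps, taken from the commands visible in the trace (the second initiator/target pair, the comment-tagged iptables wrapper, and the "Cannot find device" pre-checks are omitted):

    ip netns add nvmf_tgt_ns_spdk                                  # target network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # host bridge joining both pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.3                                             # connectivity checks follow next in the trace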
01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:31:24.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:31:24.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 01:31:24.631 01:31:24.631 --- 10.0.0.3 ping statistics --- 01:31:24.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:31:24.631 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:31:24.631 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:31:24.631 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 01:31:24.631 01:31:24.631 --- 10.0.0.4 ping statistics --- 01:31:24.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:31:24.631 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:31:24.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:31:24.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:31:24.631 01:31:24.631 --- 10.0.0.1 ping statistics --- 01:31:24.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:31:24.631 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:31:24.631 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:31:24.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:31:24.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 01:31:24.631 01:31:24.631 --- 10.0.0.2 ping statistics --- 01:31:24.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:31:24.632 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96348 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96348 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:31:24.632 05:26:16 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96348 ']' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:24.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:24.632 05:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:31:24.632 [2024-12-09 05:26:16.190087] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:24.632 [2024-12-09 05:26:16.190141] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:31:24.889 [2024-12-09 05:26:16.329299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:31:24.889 [2024-12-09 05:26:16.364540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:31:24.889 [2024-12-09 05:26:16.364585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:31:24.889 [2024-12-09 05:26:16.364592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:31:24.889 [2024-12-09 05:26:16.364597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:31:24.889 [2024-12-09 05:26:16.364602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:31:24.889 [2024-12-09 05:26:16.365277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:24.890 [2024-12-09 05:26:16.365438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:25.455 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:25.455 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:31:25.455 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:31:25.455 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:25.455 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:31:25.713 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:31:25.713 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:31:25.713 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:31:25.713 [2024-12-09 05:26:17.288971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:31:25.713 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:31:25.971 Malloc0 01:31:25.971 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:31:26.230 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:31:26.488 05:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:26.746 [2024-12-09 05:26:18.150397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96439 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96439 /var/tmp/bdevperf.sock 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96439 ']' 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:31:26.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:26.746 05:26:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:31:26.746 [2024-12-09 05:26:18.198743] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:31:26.746 [2024-12-09 05:26:18.198799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96439 ] 01:31:26.746 [2024-12-09 05:26:18.336524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:27.004 [2024-12-09 05:26:18.372966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:31:27.569 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:27.569 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:31:27.569 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:31:27.825 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:31:28.083 NVMe0n1 01:31:28.083 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96487 01:31:28.083 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:31:28.083 05:26:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:31:28.083 Running I/O for 10 seconds... 
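The target and host setup that timeout.sh drives above can be read as the following condensed RPC sequence (commands taken from the trace; the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py and bdevperf paths are shortened). The target exports a 64 MiB malloc bdev as nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420, and bdevperf attaches with a 5 s controller-loss timeout and 2 s reconnect delay, so that removing the listener (the next RPC in the trace) exercises the reconnect/timeout path while the 10 s verify workload is running:

    # target side (rpc.py against the nvmf_tgt in the namespace)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # host side (bdevperf started with -z, controlled over /var/tmp/bdevperf.sock)
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1    # retry count -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
           -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
           --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests             # start the 10 s verify run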
01:31:29.014 05:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:29.275 12941.00 IOPS, 50.55 MiB/s [2024-12-09T05:26:20.892Z] [2024-12-09 05:26:20.768193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.275 [2024-12-09 05:26:20.768267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 
01:31:29.276 [2024-12-09 05:26:20.768305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768463] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.768517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc510 is same with the state(6) to be set 01:31:29.276 [2024-12-09 05:26:20.770081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.276 [2024-12-09 05:26:20.770200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.276 [2024-12-09 05:26:20.770206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.277 [2024-12-09 05:26:20.770544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 
05:26:20.770699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.277 [2024-12-09 05:26:20.770724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.277 [2024-12-09 05:26:20.770731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.770854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.770991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.770998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:29.278 [2024-12-09 05:26:20.771082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.278 [2024-12-09 05:26:20.771251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.278 [2024-12-09 05:26:20.771258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 
05:26:20.771362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:29.279 [2024-12-09 05:26:20.771498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115768 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115776 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115784 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115792 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115808 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115816 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115824 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115080 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115088 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115096 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115104 len:8 PRP1 0x0 PRP2 0x0 01:31:29.279 [2024-12-09 05:26:20.771768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.279 [2024-12-09 05:26:20.771774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.279 [2024-12-09 05:26:20.771778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.279 [2024-12-09 05:26:20.771783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115112 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.771788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.771794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.771798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.771803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115120 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.771809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.771814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.771819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.771824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115128 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.771829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.771835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.771839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.771843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115136 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.771849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.771855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.771859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.771863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115144 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.771868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.771874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.771878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115152 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.777220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115160 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.777242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115168 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.777263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115176 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.777283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115184 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:29.280 [2024-12-09 05:26:20.777302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:29.280 [2024-12-09 05:26:20.777307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115192 len:8 PRP1 0x0 PRP2 0x0 01:31:29.280 [2024-12-09 05:26:20.777312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:31:29.280 [2024-12-09 05:26:20.777441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:31:29.280 [2024-12-09 05:26:20.777453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:31:29.280 [2024-12-09 05:26:20.777465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:31:29.280 [2024-12-09 05:26:20.777477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:29.280 [2024-12-09 05:26:20.777482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ff50 is same with the state(6) to be set 01:31:29.280 [2024-12-09 05:26:20.777705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:31:29.280 [2024-12-09 05:26:20.777727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ff50 (9): Bad file descriptor 01:31:29.280 [2024-12-09 05:26:20.777797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:29.280 [2024-12-09 05:26:20.777812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157ff50 with addr=10.0.0.3, port=4420 01:31:29.280 [2024-12-09 05:26:20.777819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ff50 is same with the state(6) to be set 01:31:29.280 [2024-12-09 05:26:20.777830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ff50 (9): Bad file descriptor 01:31:29.280 [2024-12-09 05:26:20.777840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:31:29.280 [2024-12-09 05:26:20.777845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:31:29.280 [2024-12-09 05:26:20.777851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:31:29.280 [2024-12-09 05:26:20.777858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
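For readers working through the dump above: the long run of *NOTICE* entries is the queued bdevperf I/O on qpair 1 being completed with the NVMe status "ABORTED - SQ DELETION" (00/08), i.e. status code type 0h / status code 08h, "Command Aborted due to SQ Deletion", after the target side of the connection went away; the connect() failures with errno = 111 (ECONNREFUSED) that follow are the host's reconnect attempts to 10.0.0.3:4420 being refused until the listener is re-created later in the log. A minimal sketch for summarizing such a dump offline, assuming the console output has been saved to a file named build.log (a hypothetical name, not something this job produces):

  # count how many commands were completed with ABORTED - SQ DELETION
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  # print the lowest and highest LBA among the aborted READ/WRITE commands
  grep -oE 'lba:[0-9]+' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'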
01:31:29.280 [2024-12-09 05:26:20.777864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:31:29.280 05:26:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
01:31:31.213 7175.50 IOPS, 28.03 MiB/s [2024-12-09T05:26:22.830Z] 4783.67 IOPS, 18.69 MiB/s [2024-12-09T05:26:22.830Z] [2024-12-09 05:26:22.778067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:31:31.213 [2024-12-09 05:26:22.778099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157ff50 with addr=10.0.0.3, port=4420
01:31:31.213 [2024-12-09 05:26:22.778106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ff50 is same with the state(6) to be set
01:31:31.213 [2024-12-09 05:26:22.778119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ff50 (9): Bad file descriptor
01:31:31.213 [2024-12-09 05:26:22.778127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:31:31.213 [2024-12-09 05:26:22.778132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:31:31.213 [2024-12-09 05:26:22.778137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:31:31.213 [2024-12-09 05:26:22.778142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:31:31.213 [2024-12-09 05:26:22.778147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:31:31.213 05:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
01:31:31.213 05:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:31:31.213 05:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:31:31.488 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
01:31:31.488 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
01:31:31.488 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:31:31.488 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:31:31.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
01:31:31.748 05:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
01:31:33.255 3587.75 IOPS, 14.01 MiB/s [2024-12-09T05:26:24.872Z] 2870.20 IOPS, 11.21 MiB/s [2024-12-09T05:26:24.872Z] [2024-12-09 05:26:24.778374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:31:33.255 [2024-12-09 05:26:24.778402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157ff50 with addr=10.0.0.3, port=4420
01:31:33.255 [2024-12-09 05:26:24.778408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157ff50 is same with the state(6) to be set
01:31:33.255 [2024-12-09 05:26:24.778420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157ff50 (9): Bad file descriptor
01:31:33.255 [2024-12-09 05:26:24.778429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:31:33.255 [2024-12-09 05:26:24.778433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:31:33.255 [2024-12-09 05:26:24.778439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:31:33.255 [2024-12-09 05:26:24.778445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:31:33.255 [2024-12-09 05:26:24.778450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:31:35.129 2391.83 IOPS, 9.34 MiB/s [2024-12-09T05:26:27.006Z] 2050.14 IOPS, 8.01 MiB/s [2024-12-09T05:26:27.006Z] [2024-12-09 05:26:26.778484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:31:35.389 [2024-12-09 05:26:26.778598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:31:35.389 [2024-12-09 05:26:26.778608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:31:35.389 [2024-12-09 05:26:26.778614] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
01:31:35.389 [2024-12-09 05:26:26.778621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:31:35.389
01:31:35.389 Latency(us)
01:31:35.389 [2024-12-09T05:26:27.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:31:35.389 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:31:35.389 Verification LBA range: start 0x0 length 0x4000
01:31:35.389 NVMe0n1 : 7.12 2014.97 7.87 55.04 0.00 61702.22 256.79 6013986.66
01:31:35.389 [2024-12-09T05:26:27.006Z] ===================================================================================================================
01:31:35.389 [2024-12-09T05:26:27.006Z] Total : 2014.97 7.87 55.04 0.00 61702.22 256.79 6013986.66
01:31:35.389 {
01:31:35.389 "results": [
01:31:35.389 {
01:31:35.389 "job": "NVMe0n1",
01:31:35.389 "core_mask": "0x4",
01:31:35.389 "workload": "verify",
01:31:35.389 "status": "finished",
01:31:35.389 "verify_range": {
01:31:35.389 "start": 0,
01:31:35.389 "length": 16384
01:31:35.389 },
01:31:35.389 "queue_depth": 128,
01:31:35.389 "io_size": 4096,
01:31:35.389 "runtime": 7.122185,
01:31:35.389 "iops": 2014.9715291023751,
01:31:35.389 "mibps": 7.870982535556153,
01:31:35.389 "io_failed": 392,
01:31:35.389 "io_timeout": 0,
01:31:35.389 "avg_latency_us": 61702.21621358767,
01:31:35.389 "min_latency_us": 256.7876923076923,
01:31:35.389 "max_latency_us": 6013986.658461538
01:31:35.389 }
01:31:35.389 ],
01:31:35.389 "core_count": 1
01:31:35.389 }
01:31:36.771 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
01:31:36.771 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:31:36.771 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96487
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96439
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96439 ']'
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96439
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:31:37.030 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96439
01:31:37.291 killing process with pid 96439 Received shutdown signal, test time was about 8.991573 seconds
01:31:37.291
01:31:37.291 Latency(us)
01:31:37.291 [2024-12-09T05:26:28.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:31:37.291 [2024-12-09T05:26:28.908Z] ===================================================================================================================
01:31:37.291 [2024-12-09T05:26:28.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96439'
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96439
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96439
01:31:37.291 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
01:31:37.553 [2024-12-09 05:26:28.968337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96645
01:31:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96645 /var/tmp/bdevperf.sock
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96645 ']'
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
01:31:37.553 05:26:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
01:31:37.553 [2024-12-09 05:26:29.025182] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
01:31:37.553 [2024-12-09 05:26:29.025238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96645 ]
01:31:37.814 [2024-12-09 05:26:29.157619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:31:37.814 [2024-12-09 05:26:29.187917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
01:31:38.385 05:26:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:31:38.386 05:26:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
01:31:38.386 05:26:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
01:31:38.646 05:26:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
01:31:38.905 NVMe0n1
01:31:38.905 05:26:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96687
01:31:38.905 05:26:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:31:38.905 05:26:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
01:31:38.905 Running I/O for 10 seconds...
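For reference, the xtrace lines above boil down to the following sequence, which sets up the second phase of the timeout test (paths, addresses and flag values are taken verbatim from the trace; in the real script bdevperf is started in the background and waited for with waitforlisten before the RPCs are issued, which the trailing "&" below only approximates):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The small --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5 values are the behaviour under test here: when the listener is removed again just below, reconnect attempts are expected roughly once per second, pending I/O is expected to start failing fast after a couple of seconds, and the controller is given up on after about five.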
01:31:39.845 05:26:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:40.107 12870.00 IOPS, 50.27 MiB/s [2024-12-09T05:26:31.724Z] [2024-12-09 05:26:31.523473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.523975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.523998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.107 [2024-12-09 05:26:31.524044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.107 [2024-12-09 05:26:31.524065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.108 [2024-12-09 05:26:31.524132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.524964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.524986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:31:40.108 [2024-12-09 05:26:31.525303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 
05:26:31.525406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.108 [2024-12-09 05:26:31.525476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.108 [2024-12-09 05:26:31.525482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525615] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.109 [2024-12-09 05:26:31.525833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.109 [2024-12-09 05:26:31.525889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.109 [2024-12-09 05:26:31.525893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 
[2024-12-09 05:26:31.525930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.525985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.525991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.525995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526030] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:40.110 [2024-12-09 05:26:31.526149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.110 [2024-12-09 05:26:31.526291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.110 [2024-12-09 05:26:31.526297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:40.111 [2024-12-09 05:26:31.526302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.111 [2024-12-09 05:26:31.526307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657820 is same with the state(6) to be set 01:31:40.111 [2024-12-09 05:26:31.526314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:40.111 [2024-12-09 05:26:31.526317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:40.111 [2024-12-09 05:26:31.526321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114960 len:8 PRP1 0x0 PRP2 0x0 01:31:40.111 [2024-12-09 05:26:31.526326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:40.111 [2024-12-09 05:26:31.526534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:31:40.111 [2024-12-09 05:26:31.526591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:40.111 [2024-12-09 05:26:31.526647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
01:31:40.111 [2024-12-09 05:26:31.526656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:40.111 [2024-12-09 05:26:31.526661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:40.111 [2024-12-09 05:26:31.526669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:40.111 [2024-12-09 05:26:31.526677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:31:40.111 [2024-12-09 05:26:31.526681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:31:40.111 [2024-12-09 05:26:31.526686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:31:40.111 [2024-12-09 05:26:31.526691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:31:40.111 [2024-12-09 05:26:31.526697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:31:40.111 05:26:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 01:31:41.050 7137.50 IOPS, 27.88 MiB/s [2024-12-09T05:26:32.667Z] [2024-12-09 05:26:32.526777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:41.050 [2024-12-09 05:26:32.526860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:41.050 [2024-12-09 05:26:32.526910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:41.050 [2024-12-09 05:26:32.526940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:41.050 [2024-12-09 05:26:32.526967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:31:41.050 [2024-12-09 05:26:32.526987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:31:41.050 [2024-12-09 05:26:32.527008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:31:41.050 [2024-12-09 05:26:32.527023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:31:41.050 [2024-12-09 05:26:32.527128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:31:41.050 05:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:41.309 [2024-12-09 05:26:32.731012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:31:41.309 05:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96687 01:31:42.135 4758.33 IOPS, 18.59 MiB/s [2024-12-09T05:26:33.752Z] [2024-12-09 05:26:33.537788] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
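The burst of ABORTED - SQ DELETION completions and the once-per-second connect() failures (errno 111) above are induced deliberately: the test drops the target listener, lets bdev_nvme retry at --reconnect-delay-sec intervals, and restores the listener before --ctrlr-loss-timeout-sec expires so the final reset succeeds. A minimal sketch of that toggle, using the commands shown at timeout.sh@87-91:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener: queued I/O is aborted and reconnect attempts start failing
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Stay down for only one second, well inside the 5 s ctrlr-loss window
  sleep 1
  # Restore the listener; the next controller reset completes and I/O resumes
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420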
01:31:44.036 3568.75 IOPS, 13.94 MiB/s [2024-12-09T05:26:36.596Z] 5164.20 IOPS, 20.17 MiB/s [2024-12-09T05:26:37.539Z] 6474.67 IOPS, 25.29 MiB/s [2024-12-09T05:26:38.479Z] 7385.57 IOPS, 28.85 MiB/s [2024-12-09T05:26:39.420Z] 8071.38 IOPS, 31.53 MiB/s [2024-12-09T05:26:40.804Z] 8622.22 IOPS, 33.68 MiB/s [2024-12-09T05:26:40.804Z] 9050.70 IOPS, 35.35 MiB/s 01:31:49.187 Latency(us) 01:31:49.187 [2024-12-09T05:26:40.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:49.187 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:31:49.187 Verification LBA range: start 0x0 length 0x4000 01:31:49.187 NVMe0n1 : 10.00 9052.25 35.36 0.00 0.00 14122.33 1392.64 3019898.88 01:31:49.187 [2024-12-09T05:26:40.804Z] =================================================================================================================== 01:31:49.187 [2024-12-09T05:26:40.804Z] Total : 9052.25 35.36 0.00 0.00 14122.33 1392.64 3019898.88 01:31:49.187 { 01:31:49.187 "results": [ 01:31:49.187 { 01:31:49.187 "job": "NVMe0n1", 01:31:49.187 "core_mask": "0x4", 01:31:49.187 "workload": "verify", 01:31:49.187 "status": "finished", 01:31:49.187 "verify_range": { 01:31:49.187 "start": 0, 01:31:49.187 "length": 16384 01:31:49.187 }, 01:31:49.187 "queue_depth": 128, 01:31:49.187 "io_size": 4096, 01:31:49.187 "runtime": 10.003372, 01:31:49.187 "iops": 9052.247582115311, 01:31:49.187 "mibps": 35.360342117637934, 01:31:49.187 "io_failed": 0, 01:31:49.187 "io_timeout": 0, 01:31:49.187 "avg_latency_us": 14122.326642195943, 01:31:49.187 "min_latency_us": 1392.64, 01:31:49.187 "max_latency_us": 3019898.88 01:31:49.187 } 01:31:49.187 ], 01:31:49.187 "core_count": 1 01:31:49.187 } 01:31:49.187 05:26:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96809 01:31:49.187 05:26:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:31:49.187 05:26:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 01:31:49.187 Running I/O for 10 seconds... 
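The per-job JSON that bdevperf.py prints above is convenient for scripted checks; a small sketch pulling out the headline numbers with jq (results.json is a hypothetical file holding just the JSON block shown in the log, with the log timestamps stripped):

  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed \(.io_failed)"' results.json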
01:31:50.138 05:26:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:50.138 13163.00 IOPS, 51.42 MiB/s [2024-12-09T05:26:41.755Z] [2024-12-09 05:26:41.633523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 
01:31:50.138 [2024-12-09 05:26:41.633640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.138 [2024-12-09 05:26:41.633673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633796] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.633819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c22d40 is same with the state(6) to be set 01:31:50.139 [2024-12-09 05:26:41.634147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:31:50.139 [2024-12-09 05:26:41.634355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 
05:26:41.634455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.139 [2024-12-09 05:26:41.634587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.139 [2024-12-09 05:26:41.634591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:31:50.140 [2024-12-09 05:26:41.634813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.634993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.634997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 
05:26:41.635080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.140 [2024-12-09 05:26:41.635191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.140 [2024-12-09 05:26:41.635197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:31:50.141 [2024-12-09 05:26:41.635469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656610 is same with the state(6) to be set 01:31:50.141 [2024-12-09 05:26:41.635480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:31:50.141 [2024-12-09 05:26:41.635484] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:31:50.141 [2024-12-09 05:26:41.635490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117368 len:8 PRP1 0x0 PRP2 0x0 01:31:50.141 [2024-12-09 05:26:41.635494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:31:50.141 [2024-12-09 05:26:41.635699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:31:50.141 [2024-12-09 05:26:41.635743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:50.141 [2024-12-09 05:26:41.635799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:50.141 [2024-12-09 05:26:41.635807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:50.141 [2024-12-09 05:26:41.635812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:50.141 [2024-12-09 05:26:41.635820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:50.141 [2024-12-09 05:26:41.635828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:31:50.141 [2024-12-09 05:26:41.635832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:31:50.141 [2024-12-09 05:26:41.635837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:31:50.141 [2024-12-09 05:26:41.635842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:31:50.141 [2024-12-09 05:26:41.635847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:31:50.141 05:26:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:31:51.084 7272.00 IOPS, 28.41 MiB/s [2024-12-09T05:26:42.701Z] [2024-12-09 05:26:42.635930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:51.084 [2024-12-09 05:26:42.635964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:51.084 [2024-12-09 05:26:42.635972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:51.084 [2024-12-09 05:26:42.635985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:51.084 [2024-12-09 05:26:42.635995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:31:51.084 [2024-12-09 05:26:42.635999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:31:51.084 [2024-12-09 05:26:42.636005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:31:51.084 [2024-12-09 05:26:42.636011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:31:51.084 [2024-12-09 05:26:42.636016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:31:52.028 4848.00 IOPS, 18.94 MiB/s [2024-12-09T05:26:43.645Z] [2024-12-09 05:26:43.636090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:52.028 [2024-12-09 05:26:43.636117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:52.028 [2024-12-09 05:26:43.636124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:52.028 [2024-12-09 05:26:43.636134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:52.028 [2024-12-09 05:26:43.636143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:31:52.028 [2024-12-09 05:26:43.636147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:31:52.028 [2024-12-09 05:26:43.636152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:31:52.028 [2024-12-09 05:26:43.636157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:31:52.028 [2024-12-09 05:26:43.636163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:31:53.229 3636.00 IOPS, 14.20 MiB/s [2024-12-09T05:26:44.846Z] [2024-12-09 05:26:44.638837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:31:53.229 [2024-12-09 05:26:44.638867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebf50 with addr=10.0.0.3, port=4420 01:31:53.229 [2024-12-09 05:26:44.638875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebf50 is same with the state(6) to be set 01:31:53.229 [2024-12-09 05:26:44.639036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebf50 (9): Bad file descriptor 01:31:53.229 [2024-12-09 05:26:44.639192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:31:53.229 [2024-12-09 05:26:44.639197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:31:53.229 [2024-12-09 05:26:44.639202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:31:53.229 [2024-12-09 05:26:44.639207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:31:53.229 [2024-12-09 05:26:44.639213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:31:53.229 05:26:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:31:53.490 [2024-12-09 05:26:44.846967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:31:53.490 05:26:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96809 01:31:54.062 2908.80 IOPS, 11.36 MiB/s [2024-12-09T05:26:45.679Z] [2024-12-09 05:26:45.666737] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 01:31:55.998 4336.17 IOPS, 16.94 MiB/s [2024-12-09T05:26:48.556Z] 5618.71 IOPS, 21.95 MiB/s [2024-12-09T05:26:49.939Z] 6598.75 IOPS, 25.78 MiB/s [2024-12-09T05:26:50.878Z] 7327.78 IOPS, 28.62 MiB/s [2024-12-09T05:26:50.878Z] 7916.40 IOPS, 30.92 MiB/s 01:31:59.261 Latency(us) 01:31:59.261 [2024-12-09T05:26:50.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:59.261 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:31:59.261 Verification LBA range: start 0x0 length 0x4000 01:31:59.261 NVMe0n1 : 10.00 7916.97 30.93 5612.27 0.00 9441.08 400.15 3019898.88 01:31:59.261 [2024-12-09T05:26:50.878Z] =================================================================================================================== 01:31:59.261 [2024-12-09T05:26:50.878Z] Total : 7916.97 30.93 5612.27 0.00 9441.08 0.00 3019898.88 01:31:59.261 { 01:31:59.261 "results": [ 01:31:59.261 { 01:31:59.261 "job": "NVMe0n1", 01:31:59.261 "core_mask": "0x4", 01:31:59.261 "workload": "verify", 01:31:59.261 "status": "finished", 01:31:59.261 "verify_range": { 01:31:59.261 "start": 0, 01:31:59.261 "length": 16384 01:31:59.261 }, 01:31:59.261 "queue_depth": 128, 01:31:59.261 "io_size": 4096, 01:31:59.261 "runtime": 10.003444, 01:31:59.261 "iops": 7916.973394362982, 01:31:59.261 "mibps": 30.925677321730397, 01:31:59.261 "io_failed": 56142, 01:31:59.261 "io_timeout": 0, 01:31:59.261 "avg_latency_us": 9441.07731352666, 01:31:59.261 "min_latency_us": 400.1476923076923, 01:31:59.261 "max_latency_us": 3019898.88 01:31:59.261 } 01:31:59.261 ], 01:31:59.261 "core_count": 1 01:31:59.261 } 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96645 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96645 ']' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96645 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96645 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:31:59.261 killing process with pid 96645 01:31:59.261 Received shutdown signal, test time was about 10.000000 seconds 01:31:59.261 01:31:59.261 Latency(us) 01:31:59.261 [2024-12-09T05:26:50.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:59.261 [2024-12-09T05:26:50.878Z] 
=================================================================================================================== 01:31:59.261 [2024-12-09T05:26:50.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96645' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96645 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96645 01:31:59.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96935 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96935 /var/tmp/bdevperf.sock 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96935 ']' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:31:59.261 05:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:31:59.261 [2024-12-09 05:26:50.749011] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:31:59.261 [2024-12-09 05:26:50.749217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96935 ] 01:31:59.520 [2024-12-09 05:26:50.884148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:59.520 [2024-12-09 05:26:50.914522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:32:00.088 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:00.088 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:32:00.088 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96935 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:32:00.088 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96963 01:32:00.088 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:32:00.347 05:26:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:32:00.605 NVMe0n1 01:32:00.605 05:26:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97017 01:32:00.605 05:26:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:32:00.605 05:26:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:32:00.605 Running I/O for 10 seconds... 
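The two knobs that shape the rest of this test case are on the attach call just traced. Restated as a sketch with the values from this run (rpc.py path, socket, and address copied from the log; the trailing comments are paraphrases of the option meanings, not authoritative documentation):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 \
        --reconnect-delay-sec 2
    # --reconnect-delay-sec 2    : wait ~2 s between reconnect attempts after the connection drops
    # --ctrlr-loss-timeout-sec 5 : stop retrying once ~5 s pass without a successful reconnect,
    #                              leaving the controller in the failed state

With bpftrace attached via nvmf_timeout.bt to record reset and reconnect events, the test then removes the target listener (traced just below) and checks the resulting reconnect schedule against the probe output.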
01:32:01.546 05:26:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:32:01.808 24752.00 IOPS, 96.69 MiB/s [2024-12-09T05:26:53.425Z] [2024-12-09 05:26:53.301916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 
01:32:01.808 [2024-12-09 05:26:53.302367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c258f0 is same with the state(6) to be set 01:32:01.808 [2024-12-09 05:26:53.302849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.302995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.302999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.808 [2024-12-09 05:26:53.303108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.808 [2024-12-09 05:26:53.303112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 
[2024-12-09 05:26:53.303158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.809 [2024-12-09 05:26:53.303476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.809 [2024-12-09 05:26:53.303481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:32:01.810 [2024-12-09 05:26:53.303679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 
05:26:53.303781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.810 [2024-12-09 05:26:53.303858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.810 [2024-12-09 05:26:53.303864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.303991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.303995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:32:01.811 [2024-12-09 05:26:53.304169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:32:01.811 [2024-12-09 05:26:53.304189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:32:01.811 [2024-12-09 05:26:53.304193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121496 len:8 PRP1 0x0 PRP2 0x0 01:32:01.811 [2024-12-09 05:26:53.304199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:32:01.811 [2024-12-09 05:26:53.304417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:32:01.811 [2024-12-09 05:26:53.304461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ef50 (9): Bad file descriptor 01:32:01.811 [2024-12-09 05:26:53.304520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:32:01.811 [2024-12-09 05:26:53.304529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206ef50 with addr=10.0.0.3, port=4420 01:32:01.811 [2024-12-09 05:26:53.304535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206ef50 is same with the state(6) to be set 01:32:01.811 [2024-12-09 05:26:53.304543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ef50 (9): Bad file descriptor 01:32:01.811 [2024-12-09 05:26:53.304551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:32:01.811 [2024-12-09 05:26:53.304555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:32:01.811 [2024-12-09 05:26:53.304570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:32:01.811 [2024-12-09 05:26:53.304575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:32:01.811 [2024-12-09 05:26:53.304581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:32:01.811 05:26:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97017 01:32:03.694 13847.00 IOPS, 54.09 MiB/s [2024-12-09T05:26:55.311Z] 9231.33 IOPS, 36.06 MiB/s [2024-12-09T05:26:55.311Z] [2024-12-09 05:26:55.304782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:32:03.694 [2024-12-09 05:26:55.304910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206ef50 with addr=10.0.0.3, port=4420 01:32:03.694 [2024-12-09 05:26:55.304922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206ef50 is same with the state(6) to be set 01:32:03.694 [2024-12-09 05:26:55.304938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ef50 (9): Bad file descriptor 01:32:03.694 [2024-12-09 05:26:55.304947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:32:03.694 [2024-12-09 05:26:55.304952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:32:03.694 [2024-12-09 05:26:55.304957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:32:03.694 [2024-12-09 05:26:55.304964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
01:32:03.694 [2024-12-09 05:26:55.304970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:32:06.005 6923.50 IOPS, 27.04 MiB/s [2024-12-09T05:26:57.622Z] 5538.80 IOPS, 21.64 MiB/s [2024-12-09T05:26:57.622Z] [2024-12-09 05:26:57.305084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:32:06.005 [2024-12-09 05:26:57.305112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206ef50 with addr=10.0.0.3, port=4420 01:32:06.005 [2024-12-09 05:26:57.305118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206ef50 is same with the state(6) to be set 01:32:06.005 [2024-12-09 05:26:57.305130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ef50 (9): Bad file descriptor 01:32:06.005 [2024-12-09 05:26:57.305139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:32:06.005 [2024-12-09 05:26:57.305144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:32:06.005 [2024-12-09 05:26:57.305148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:32:06.005 [2024-12-09 05:26:57.305154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:32:06.005 [2024-12-09 05:26:57.305160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:32:07.868 4615.67 IOPS, 18.03 MiB/s [2024-12-09T05:26:59.485Z] 3956.29 IOPS, 15.45 MiB/s [2024-12-09T05:26:59.485Z] [2024-12-09 05:26:59.305333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:32:07.868 [2024-12-09 05:26:59.305354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:32:07.868 [2024-12-09 05:26:59.305360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:32:07.868 [2024-12-09 05:26:59.305365] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 01:32:07.868 [2024-12-09 05:26:59.305371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
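The spacing of the failures above is just those two attach-time knobs interacting: reconnect attempts land roughly every reconnect-delay-sec (the 05:26:53, :55 and :57 failures), until ctrlr-loss-timeout-sec runs out and the 05:26:59 attempt only finds the controller "already in failed state" and gives up. A back-of-the-envelope sketch of that schedule, not SPDK's exact state machine:

    reconnect_delay=2; ctrlr_loss_timeout=5
    for ((t = 0; t <= ctrlr_loss_timeout; t += reconnect_delay)); do
        echo "~t=${t}s: reconnect attempt"                            # t=0, 2, 4: the three failed attempts above
    done
    echo "~t=${t}s: loss timeout expired, controller left failed"     # the final 05:26:59 entries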
01:32:08.799 3461.75 IOPS, 13.52 MiB/s 01:32:08.799 Latency(us) 01:32:08.799 [2024-12-09T05:27:00.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:08.799 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:32:08.799 NVMe0n1 : 8.12 3412.12 13.33 15.77 0.00 37294.97 2331.57 7020619.62 01:32:08.799 [2024-12-09T05:27:00.416Z] =================================================================================================================== 01:32:08.799 [2024-12-09T05:27:00.416Z] Total : 3412.12 13.33 15.77 0.00 37294.97 2331.57 7020619.62 01:32:08.799 { 01:32:08.799 "results": [ 01:32:08.799 { 01:32:08.799 "job": "NVMe0n1", 01:32:08.799 "core_mask": "0x4", 01:32:08.799 "workload": "randread", 01:32:08.799 "status": "finished", 01:32:08.799 "queue_depth": 128, 01:32:08.799 "io_size": 4096, 01:32:08.799 "runtime": 8.116371, 01:32:08.799 "iops": 3412.1160799574095, 01:32:08.799 "mibps": 13.328578437333631, 01:32:08.799 "io_failed": 128, 01:32:08.799 "io_timeout": 0, 01:32:08.799 "avg_latency_us": 37294.97227628384, 01:32:08.799 "min_latency_us": 2331.5692307692307, 01:32:08.799 "max_latency_us": 7020619.618461538 01:32:08.799 } 01:32:08.799 ], 01:32:08.799 "core_count": 1 01:32:08.799 } 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:32:08.799 Attaching 5 probes... 01:32:08.799 1274.207776: reset bdev controller NVMe0 01:32:08.799 1274.280850: reconnect bdev controller NVMe0 01:32:08.799 3274.514643: reconnect delay bdev controller NVMe0 01:32:08.799 3274.527610: reconnect bdev controller NVMe0 01:32:08.799 5274.814766: reconnect delay bdev controller NVMe0 01:32:08.799 5274.826699: reconnect bdev controller NVMe0 01:32:08.799 7275.116439: reconnect delay bdev controller NVMe0 01:32:08.799 7275.129105: reconnect bdev controller NVMe0 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96963 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96935 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96935 ']' 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96935 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96935 01:32:08.799 killing process with pid 96935 01:32:08.799 Received shutdown signal, test time was about 8.174031 seconds 01:32:08.799 01:32:08.799 Latency(us) 01:32:08.799 [2024-12-09T05:27:00.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:32:08.799 [2024-12-09T05:27:00.416Z] =================================================================================================================== 01:32:08.799 [2024-12-09T05:27:00.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:32:08.799 05:27:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96935' 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96935 01:32:08.799 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96935 01:32:09.055 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:32:09.312 rmmod nvme_tcp 01:32:09.312 rmmod nvme_fabrics 01:32:09.312 rmmod nvme_keyring 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96348 ']' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96348 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96348 ']' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96348 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96348 01:32:09.312 killing process with pid 96348 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96348' 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96348 01:32:09.312 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96348 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:32:09.569 05:27:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:32:09.569 05:27:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 01:32:09.569 01:32:09.569 real 0m45.538s 01:32:09.569 user 2m14.383s 01:32:09.569 sys 0m3.329s 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:09.569 05:27:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:32:09.569 ************************************ 01:32:09.569 END TEST nvmf_timeout 01:32:09.569 ************************************ 01:32:09.827 05:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 01:32:09.827 05:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:32:09.827 01:32:09.827 real 5m30.484s 01:32:09.827 user 13m57.343s 01:32:09.827 sys 0m45.181s 01:32:09.827 05:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:09.827 05:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
01:32:09.827 ************************************ 01:32:09.827 END TEST nvmf_host 01:32:09.827 ************************************ 01:32:09.827 05:27:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 01:32:09.827 05:27:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 01:32:09.827 05:27:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:32:09.827 05:27:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:09.827 05:27:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:09.827 05:27:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:32:09.828 ************************************ 01:32:09.828 START TEST nvmf_target_core_interrupt_mode 01:32:09.828 ************************************ 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:32:09.828 * Looking for test storage... 01:32:09.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:32:09.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:09.828 --rc genhtml_branch_coverage=1 01:32:09.828 --rc genhtml_function_coverage=1 01:32:09.828 --rc genhtml_legend=1 01:32:09.828 --rc geninfo_all_blocks=1 01:32:09.828 --rc geninfo_unexecuted_blocks=1 01:32:09.828 01:32:09.828 ' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:32:09.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:09.828 --rc genhtml_branch_coverage=1 01:32:09.828 --rc genhtml_function_coverage=1 01:32:09.828 --rc genhtml_legend=1 01:32:09.828 --rc geninfo_all_blocks=1 01:32:09.828 --rc geninfo_unexecuted_blocks=1 01:32:09.828 01:32:09.828 ' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:32:09.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:09.828 --rc genhtml_branch_coverage=1 01:32:09.828 --rc genhtml_function_coverage=1 01:32:09.828 --rc genhtml_legend=1 01:32:09.828 --rc geninfo_all_blocks=1 01:32:09.828 --rc geninfo_unexecuted_blocks=1 01:32:09.828 01:32:09.828 ' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:32:09.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:09.828 --rc genhtml_branch_coverage=1 01:32:09.828 --rc genhtml_function_coverage=1 01:32:09.828 --rc genhtml_legend=1 01:32:09.828 --rc geninfo_all_blocks=1 01:32:09.828 --rc geninfo_unexecuted_blocks=1 01:32:09.828 01:32:09.828 ' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 01:32:09.828 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:32:09.829 ************************************ 01:32:09.829 START TEST nvmf_abort 01:32:09.829 ************************************ 01:32:09.829 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:32:10.088 * Looking for test storage... 01:32:10.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:32:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:10.088 --rc genhtml_branch_coverage=1 01:32:10.088 --rc genhtml_function_coverage=1 01:32:10.088 --rc genhtml_legend=1 01:32:10.088 --rc geninfo_all_blocks=1 01:32:10.088 --rc geninfo_unexecuted_blocks=1 01:32:10.088 01:32:10.088 ' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:32:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:10.088 --rc genhtml_branch_coverage=1 01:32:10.088 --rc genhtml_function_coverage=1 01:32:10.088 --rc genhtml_legend=1 01:32:10.088 --rc geninfo_all_blocks=1 01:32:10.088 --rc geninfo_unexecuted_blocks=1 01:32:10.088 01:32:10.088 ' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:32:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:10.088 --rc genhtml_branch_coverage=1 01:32:10.088 --rc genhtml_function_coverage=1 01:32:10.088 --rc genhtml_legend=1 01:32:10.088 --rc geninfo_all_blocks=1 01:32:10.088 --rc geninfo_unexecuted_blocks=1 01:32:10.088 01:32:10.088 ' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:32:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:10.088 --rc genhtml_branch_coverage=1 01:32:10.088 --rc genhtml_function_coverage=1 01:32:10.088 --rc genhtml_legend=1 01:32:10.088 --rc geninfo_all_blocks=1 01:32:10.088 --rc geninfo_unexecuted_blocks=1 01:32:10.088 01:32:10.088 ' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:32:10.088 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:32:10.089 05:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:32:10.089 Cannot find device "nvmf_init_br" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:32:10.089 Cannot find device "nvmf_init_br2" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:32:10.089 Cannot find device "nvmf_tgt_br" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:32:10.089 Cannot find device "nvmf_tgt_br2" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:32:10.089 Cannot find device "nvmf_init_br" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:32:10.089 Cannot find device "nvmf_init_br2" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:32:10.089 Cannot find device "nvmf_tgt_br" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:32:10.089 Cannot find device "nvmf_tgt_br2" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:32:10.089 Cannot find device "nvmf_br" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:32:10.089 Cannot find device "nvmf_init_if" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:32:10.089 Cannot find device "nvmf_init_if2" 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:10.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:10.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:32:10.089 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:32:10.348 
05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:32:10.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:32:10.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:32:10.348 01:32:10.348 --- 10.0.0.3 ping statistics --- 01:32:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:10.348 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:32:10.348 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:32:10.348 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 01:32:10.348 01:32:10.348 --- 10.0.0.4 ping statistics --- 01:32:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:10.348 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:32:10.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:32:10.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:32:10.348 01:32:10.348 --- 10.0.0.1 ping statistics --- 01:32:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:10.348 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:32:10.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:32:10.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 01:32:10.348 01:32:10.348 --- 10.0.0.2 ping statistics --- 01:32:10.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:10.348 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 01:32:10.348 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97429 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97429 01:32:10.349 05:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97429 ']' 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:10.349 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:10.349 [2024-12-09 05:27:01.876996] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:32:10.349 [2024-12-09 05:27:01.877749] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:10.349 [2024-12-09 05:27:01.877788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:32:10.615 [2024-12-09 05:27:02.018154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:32:10.615 [2024-12-09 05:27:02.053700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:32:10.615 [2024-12-09 05:27:02.053729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:32:10.615 [2024-12-09 05:27:02.053735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:32:10.615 [2024-12-09 05:27:02.053740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:32:10.615 [2024-12-09 05:27:02.053745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:32:10.615 [2024-12-09 05:27:02.054346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:32:10.615 [2024-12-09 05:27:02.054433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:32:10.615 [2024-12-09 05:27:02.054436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:32:10.615 [2024-12-09 05:27:02.107717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:32:10.615 [2024-12-09 05:27:02.108498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:32:10.615 [2024-12-09 05:27:02.108844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:32:10.615 [2024-12-09 05:27:02.109279] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
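Before the RPC configuration that follows, the harness has already launched the target inside the nvmf_tgt_ns_spdk namespace with the command visible above (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE) and is waiting for it to listen on /var/tmp/spdk.sock. A rough by-hand equivalent is sketched below; the 30-second polling loop is an assumed substitute for the harness's own wait logic, and the repo path is the one shown in the log.

# Sketch: start nvmf_tgt in interrupt mode inside the target namespace
# and wait for the default RPC socket before configuring it.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

# The netns only changes the network view, so the UNIX domain socket is
# still created at the usual path on the shared filesystem.
for _ in $(seq 1 30); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 1
done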
01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.180 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.180 [2024-12-09 05:27:02.795044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.437 Malloc0 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.437 Delay0 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.437 05:27:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.437 [2024-12-09 05:27:02.863077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:32:11.437 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.438 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:32:11.438 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:11.438 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:11.438 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:11.438 05:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 01:32:11.438 [2024-12-09 05:27:03.031895] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:32:13.961 Initializing NVMe Controllers 01:32:13.961 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:32:13.961 controller IO queue size 128 less than required 01:32:13.961 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 01:32:13.961 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 01:32:13.961 Initialization complete. Launching workers. 
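For reference, the abort workload launched above is the standalone SPDK example with the options visible in the trace, restated one per line with a best-effort gloss of each knob:

    # -r: NVMe-oF transport ID of the listener created on 10.0.0.3:4420
    # -c 0x1: core mask (single core), -t 1: run time in seconds,
    # -l warning: log level, -q 128: queue depth being aborted against
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128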
01:32:13.961 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31697 01:32:13.961 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31754, failed to submit 66 01:32:13.961 success 31697, unsuccessful 57, failed 0 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:32:13.961 rmmod nvme_tcp 01:32:13.961 rmmod nvme_fabrics 01:32:13.961 rmmod nvme_keyring 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97429 ']' 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97429 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97429 ']' 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97429 01:32:13.961 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97429 01:32:13.962 killing process with pid 97429 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97429' 01:32:13.962 
05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97429 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97429 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:13.962 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:14.220 05:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 01:32:14.220 01:32:14.220 real 0m4.201s 01:32:14.220 user 0m8.981s 01:32:14.220 sys 0m1.235s 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:14.220 ************************************ 01:32:14.220 END TEST nvmf_abort 01:32:14.220 ************************************ 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:32:14.220 ************************************ 01:32:14.220 START TEST nvmf_ns_hotplug_stress 01:32:14.220 ************************************ 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:32:14.220 * Looking for test storage... 01:32:14.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 01:32:14.220 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 01:32:14.221 05:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:32:14.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:14.221 --rc genhtml_branch_coverage=1 01:32:14.221 --rc genhtml_function_coverage=1 01:32:14.221 --rc genhtml_legend=1 01:32:14.221 --rc geninfo_all_blocks=1 01:32:14.221 --rc geninfo_unexecuted_blocks=1 01:32:14.221 01:32:14.221 ' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:32:14.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:14.221 --rc genhtml_branch_coverage=1 01:32:14.221 --rc genhtml_function_coverage=1 01:32:14.221 --rc genhtml_legend=1 01:32:14.221 --rc geninfo_all_blocks=1 01:32:14.221 --rc geninfo_unexecuted_blocks=1 01:32:14.221 01:32:14.221 
' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:32:14.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:14.221 --rc genhtml_branch_coverage=1 01:32:14.221 --rc genhtml_function_coverage=1 01:32:14.221 --rc genhtml_legend=1 01:32:14.221 --rc geninfo_all_blocks=1 01:32:14.221 --rc geninfo_unexecuted_blocks=1 01:32:14.221 01:32:14.221 ' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:32:14.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:14.221 --rc genhtml_branch_coverage=1 01:32:14.221 --rc genhtml_function_coverage=1 01:32:14.221 --rc genhtml_legend=1 01:32:14.221 --rc geninfo_all_blocks=1 01:32:14.221 --rc geninfo_unexecuted_blocks=1 01:32:14.221 01:32:14.221 ' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:32:14.221 05:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:32:14.221 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:14.222 05:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:32:14.222 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:32:14.480 Cannot find device "nvmf_init_br" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 01:32:14.480 Cannot find device "nvmf_init_br2" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:32:14.480 Cannot find device "nvmf_tgt_br" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:32:14.480 Cannot find device "nvmf_tgt_br2" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:32:14.480 Cannot find device "nvmf_init_br" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:32:14.480 Cannot find device "nvmf_init_br2" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:32:14.480 Cannot find device "nvmf_tgt_br" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:32:14.480 Cannot find device "nvmf_tgt_br2" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:32:14.480 Cannot find device "nvmf_br" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:32:14.480 Cannot find device "nvmf_init_if" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:32:14.480 Cannot find device "nvmf_init_if2" 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:14.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:14.480 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:32:14.480 05:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:32:14.480 05:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:32:14.480 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:32:14.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:32:14.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 01:32:14.480 01:32:14.480 --- 10.0.0.3 ping statistics --- 01:32:14.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:14.480 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 01:32:14.481 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:32:14.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:32:14.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 01:32:14.481 01:32:14.481 --- 10.0.0.4 ping statistics --- 01:32:14.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:14.481 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:32:14.481 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:32:14.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:32:14.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 01:32:14.481 01:32:14.481 --- 10.0.0.1 ping statistics --- 01:32:14.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:14.481 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 01:32:14.481 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:32:14.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:32:14.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 01:32:14.738 01:32:14.738 --- 10.0.0.2 ping statistics --- 01:32:14.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:14.738 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=97750 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 97750 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 97750 ']' 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:14.738 05:27:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:32:14.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:14.738 05:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:32:14.738 [2024-12-09 05:27:06.159264] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:32:14.738 [2024-12-09 05:27:06.160027] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:14.738 [2024-12-09 05:27:06.160070] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:32:14.738 [2024-12-09 05:27:06.301726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:32:14.738 [2024-12-09 05:27:06.336986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:32:14.738 [2024-12-09 05:27:06.337021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:32:14.738 [2024-12-09 05:27:06.337028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:32:14.738 [2024-12-09 05:27:06.337033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:32:14.738 [2024-12-09 05:27:06.337037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:32:14.738 [2024-12-09 05:27:06.337762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:32:14.738 [2024-12-09 05:27:06.338587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:32:14.738 [2024-12-09 05:27:06.338557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:32:14.997 [2024-12-09 05:27:06.392429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:32:14.997 [2024-12-09 05:27:06.393206] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:32:14.997 [2024-12-09 05:27:06.393793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:32:14.997 [2024-12-09 05:27:06.393920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
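The hotplug-stress target is then assembled through a short series of rpc.py calls that the trace below interleaves with reactor and resize output; consolidated here for readability, with every argument copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1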
01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:32:15.568 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 01:32:15.569 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:32:15.836 [2024-12-09 05:27:07.264285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:32:15.836 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:32:16.092 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:32:16.092 [2024-12-09 05:27:07.640007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:32:16.092 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:32:16.350 05:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 01:32:16.607 Malloc0 01:32:16.607 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:32:16.863 Delay0 01:32:16.863 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:17.120 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 01:32:17.120 NULL1 01:32:17.120 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:32:17.377 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 01:32:17.377 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=97872 01:32:17.377 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:17.377 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:18.745 Read completed with error (sct=0, sc=11) 01:32:18.745 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:18.745 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 01:32:18.745 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 01:32:19.002 true 01:32:19.002 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:19.002 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:19.932 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:19.932 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 01:32:19.932 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 01:32:20.189 true 01:32:20.189 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:20.189 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:20.447 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:20.705 05:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 01:32:20.705 05:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 01:32:20.705 true 01:32:20.705 05:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:20.705 05:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:22.077 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 01:32:22.077 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 01:32:22.354 true 01:32:22.354 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:22.354 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:23.288 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:23.288 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 01:32:23.288 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 01:32:23.546 true 01:32:23.546 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:23.546 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:23.546 05:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:23.803 05:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 01:32:23.803 05:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1006 01:32:24.060 true 01:32:24.060 05:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:24.060 05:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:24.993 05:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:25.249 05:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 01:32:25.249 05:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 01:32:25.505 true 01:32:25.505 05:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:25.505 05:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:25.762 05:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:25.763 05:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 01:32:25.763 05:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 01:32:26.019 true 01:32:26.019 05:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:26.019 05:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:27.388 05:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:27.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:27.388 05:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 01:32:27.388 05:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 01:32:27.388 true 01:32:27.388 05:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:27.388 05:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:27.645 05:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
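The hot-plug cycle traced above (ns_hotplug_stress.sh lines 44-50) repeats for as long as the spdk_nvme_perf client started earlier (-t 30 -q 128 -w randread -o 512 against 10.0.0.3:4420) stays alive: namespace 1 is detached, the Delay0 bdev is re-attached, and the NULL1 bdev is resized one step larger each pass. A minimal bash sketch of that loop, reconstructed from the xtrace entries; the short rpc.py name stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and is an assumption for readability, while PERF_PID is the perf client PID recorded at line 42 of the script.

    # Sketch of the single-namespace hotplug/resize cycle suggested by the xtrace above
    NQN=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # keep cycling while spdk_nvme_perf is still running
        rpc.py nvmf_subsystem_remove_ns "$NQN" 1     # detach namespace 1 under active I/O
        rpc.py nvmf_subsystem_add_ns "$NQN" Delay0   # re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"   # grow NULL1 one step per iteration (1001, 1002, ...)
    done

The loop ends once the 30-second perf run finishes, which shows up further down as the "No such process" result from kill -0.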
01:32:27.901 05:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 01:32:27.901 05:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 01:32:28.157 true 01:32:28.157 05:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:28.157 05:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:29.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.089 05:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:29.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:29.347 05:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 01:32:29.347 05:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 01:32:29.604 true 01:32:29.604 05:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:29.604 05:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:30.578 05:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:30.578 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 01:32:30.578 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 01:32:30.856 true 01:32:30.856 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:30.856 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:30.856 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:31.114 05:27:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 01:32:31.114 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 01:32:31.371 true 01:32:31.371 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:31.371 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:32.303 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:32.561 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 01:32:32.561 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 01:32:32.820 true 01:32:32.820 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:32.820 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:32.820 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:33.078 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 01:32:33.078 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 01:32:33.336 true 01:32:33.336 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:33.336 05:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:34.267 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:34.524 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 01:32:34.524 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 01:32:34.781 true 01:32:34.781 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:34.781 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:35.039 05:27:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:35.039 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 01:32:35.039 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 01:32:35.297 true 01:32:35.297 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:35.297 05:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:36.229 05:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:36.486 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 01:32:36.486 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 01:32:36.743 true 01:32:36.743 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:36.743 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:37.001 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:37.258 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 01:32:37.258 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 01:32:37.258 true 01:32:37.258 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:37.258 05:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:38.628 05:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:38.628 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 01:32:38.628 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 01:32:38.628 true 01:32:38.885 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:38.885 05:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:38.885 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:39.143 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 01:32:39.143 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 01:32:39.401 true 01:32:39.401 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:39.401 05:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:40.340 05:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:40.599 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 01:32:40.599 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 01:32:40.857 true 01:32:40.857 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:40.857 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:40.857 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:41.115 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 01:32:41.115 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 01:32:41.387 true 01:32:41.387 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:41.387 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:42.325 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:42.325 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:42.583 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 01:32:42.583 05:27:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 01:32:42.843 true 01:32:42.843 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:42.843 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:43.103 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:43.103 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 01:32:43.103 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 01:32:43.365 true 01:32:43.365 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:43.365 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:44.313 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:44.571 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 01:32:44.571 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 01:32:44.829 true 01:32:44.829 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:44.829 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:45.087 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:45.087 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 01:32:45.087 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 01:32:45.353 true 01:32:45.353 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:45.353 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:46.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.285 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:46.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:32:46.542 05:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 01:32:46.542 05:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 01:32:46.799 true 01:32:46.799 05:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:46.799 05:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:47.730 Initializing NVMe Controllers 01:32:47.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:32:47.730 Controller IO queue size 128, less than required. 01:32:47.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:32:47.730 Controller IO queue size 128, less than required. 01:32:47.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:32:47.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:32:47.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:32:47.730 Initialization complete. Launching workers. 
01:32:47.730 ======================================================== 01:32:47.730 Latency(us) 01:32:47.730 Device Information : IOPS MiB/s Average min max 01:32:47.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1149.77 0.56 68953.30 2142.23 1034254.46 01:32:47.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17944.67 8.76 7133.00 2005.45 426079.54 01:32:47.730 ======================================================== 01:32:47.730 Total : 19094.43 9.32 10855.49 2005.45 1034254.46 01:32:47.730 01:32:47.730 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:47.730 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 01:32:47.730 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 01:32:47.988 true 01:32:47.988 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97872 01:32:47.988 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (97872) - No such process 01:32:47.988 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 97872 01:32:47.988 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:48.244 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:48.501 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:32:48.501 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:32:48.501 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:32:48.501 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:48.501 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:32:48.501 null0 01:32:48.501 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:48.501 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:48.501 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:32:48.759 null1 01:32:48.759 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:48.759 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:48.759 05:27:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:32:49.015 null2 01:32:49.015 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:49.015 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:49.015 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:32:49.272 null3 01:32:49.272 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:49.272 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:49.272 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:32:49.272 null4 01:32:49.529 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:49.529 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:49.529 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:32:49.529 null5 01:32:49.529 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:49.529 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:49.529 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:32:49.787 null6 01:32:49.787 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:49.787 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:49.787 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:32:50.074 null7 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.074 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
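After the perf client exits and the leftover namespaces 1 and 2 are removed, the script switches to the parallel phase traced here: eight null bdevs (null0 through null7) are created and eight background add_remove workers are launched, each repeatedly attaching and detaching its own namespace ID on cnode1 (xtrace lines 14-18 and 58-66 of ns_hotplug_stress.sh). A condensed bash sketch of that phase, again using the shortened rpc.py name as an assumption:

    # Sketch of the eight-way parallel namespace add/remove phase seen in the trace
    NQN=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    add_remove() {                                   # one worker: attach/detach a fixed nsid repeatedly
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
            rpc.py nvmf_subsystem_remove_ns "$NQN" "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096    # null0..null7, size 100 / block size 4096 as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                # matches the 'wait 98894 98896 ...' xtrace entry that follows

Because the eight workers run concurrently against the same subsystem, their add_ns/remove_ns RPCs interleave freely, which is exactly the ordering seen in the xtrace below.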
01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 98894 98896 98897 98898 98900 98902 98905 98907 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.075 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:50.396 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.396 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.396 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:50.655 05:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:50.655 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:50.913 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.171 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 01:32:51.429 05:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:51.429 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:51.429 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.688 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:51.945 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:51.946 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:51.946 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:51.946 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:51.946 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.205 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:52.463 05:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.463 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:52.722 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:52.981 05:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:52.981 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:53.240 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:53.499 05:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:53.499 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:53.757 05:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:32:53.757 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:54.014 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.014 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:32:54.015 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 
05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.272 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.530 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.530 05:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 01:32:54.530 05:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:32:54.530 rmmod nvme_tcp 01:32:54.530 rmmod nvme_fabrics 01:32:54.530 rmmod nvme_keyring 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 97750 ']' 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 97750 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 97750 ']' 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 97750 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:32:54.530 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97750 01:32:54.531 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:32:54.531 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:32:54.531 killing process with pid 97750 01:32:54.531 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97750' 01:32:54.531 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 97750 01:32:54.531 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 97750 01:32:54.789 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:32:54.789 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:32:54.789 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@791 -- # iptables-save 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:54.790 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 01:32:55.050 01:32:55.050 real 0m40.792s 01:32:55.050 user 2m56.634s 01:32:55.050 sys 0m13.210s 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 01:32:55.050 05:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:32:55.050 ************************************ 01:32:55.050 END TEST nvmf_ns_hotplug_stress 01:32:55.050 ************************************ 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:32:55.050 ************************************ 01:32:55.050 START TEST nvmf_delete_subsystem 01:32:55.050 ************************************ 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:32:55.050 * Looking for test storage... 01:32:55.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 01:32:55.050 
05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:32:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:55.050 --rc genhtml_branch_coverage=1 01:32:55.050 --rc genhtml_function_coverage=1 01:32:55.050 --rc genhtml_legend=1 01:32:55.050 --rc geninfo_all_blocks=1 01:32:55.050 --rc geninfo_unexecuted_blocks=1 01:32:55.050 01:32:55.050 ' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:32:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:55.050 --rc genhtml_branch_coverage=1 01:32:55.050 --rc genhtml_function_coverage=1 01:32:55.050 --rc genhtml_legend=1 01:32:55.050 --rc geninfo_all_blocks=1 01:32:55.050 --rc geninfo_unexecuted_blocks=1 01:32:55.050 01:32:55.050 ' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:32:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:55.050 --rc genhtml_branch_coverage=1 01:32:55.050 --rc genhtml_function_coverage=1 01:32:55.050 --rc genhtml_legend=1 01:32:55.050 --rc geninfo_all_blocks=1 01:32:55.050 --rc geninfo_unexecuted_blocks=1 01:32:55.050 01:32:55.050 ' 01:32:55.050 05:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:32:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:32:55.050 --rc genhtml_branch_coverage=1 01:32:55.050 --rc genhtml_function_coverage=1 01:32:55.050 --rc genhtml_legend=1 01:32:55.050 --rc geninfo_all_blocks=1 01:32:55.050 --rc geninfo_unexecuted_blocks=1 01:32:55.050 01:32:55.050 ' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:32:55.050 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:32:55.051 05:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:32:55.051 05:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:32:55.051 Cannot find device "nvmf_init_br" 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 01:32:55.051 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:32:55.311 Cannot find device "nvmf_init_br2" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:32:55.311 Cannot find device "nvmf_tgt_br" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:32:55.311 Cannot find device "nvmf_tgt_br2" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:32:55.311 Cannot find device "nvmf_init_br" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # 
ip link set nvmf_init_br2 down 01:32:55.311 Cannot find device "nvmf_init_br2" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:32:55.311 Cannot find device "nvmf_tgt_br" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:32:55.311 Cannot find device "nvmf_tgt_br2" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:32:55.311 Cannot find device "nvmf_br" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:32:55.311 Cannot find device "nvmf_init_if" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:32:55.311 Cannot find device "nvmf_init_if2" 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:32:55.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:32:55.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i 
nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:32:55.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:32:55.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:32:55.311 01:32:55.311 --- 10.0.0.3 ping statistics --- 01:32:55.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:55.311 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:32:55.311 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:32:55.311 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:32:55.311 01:32:55.311 --- 10.0.0.4 ping statistics --- 01:32:55.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:55.311 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:32:55.311 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:32:55.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:32:55.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:32:55.571 01:32:55.571 --- 10.0.0.1 ping statistics --- 01:32:55.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:55.571 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:32:55.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:32:55.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 01:32:55.571 01:32:55.571 --- 10.0.0.2 ping statistics --- 01:32:55.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:32:55.571 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100272 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100272 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100272 ']' 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 01:32:55.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
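For reference, the network that nvmf_veth_init has just finished building (namespace nvmf_tgt_ns_spdk, initiator addresses 10.0.0.1/2 on the host, target addresses 10.0.0.3/4 inside the namespace, everything joined by the nvmf_br bridge) can be reproduced standalone with plain iproute2 and iptables. The following is a minimal sketch distilled from the commands visible in the trace above, not the test/nvmf/common.sh helper itself; it drops the idempotent teardown at the top of the function and the ipts wrapper that tags each iptables rule with an SPDK_NVMF comment for later cleanup.

#!/usr/bin/env bash
# Minimal standalone sketch of the veth/bridge topology nvmf_veth_init sets up above.
# Assumes root privileges and a host with no leftover nvmf_* devices.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target interfaces live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as seen in the trace: initiators .1/.2 on the host, targets .3/.4 in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge everything together and allow NVMe/TCP traffic on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, mirroring the pings in the log.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
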
01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 01:32:55.571 05:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:55.571 [2024-12-09 05:27:46.997617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:32:55.571 [2024-12-09 05:27:46.998352] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:32:55.571 [2024-12-09 05:27:46.998395] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:32:55.571 [2024-12-09 05:27:47.138080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:32:55.571 [2024-12-09 05:27:47.171954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:32:55.571 [2024-12-09 05:27:47.171991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:32:55.571 [2024-12-09 05:27:47.171997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:32:55.571 [2024-12-09 05:27:47.172003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:32:55.571 [2024-12-09 05:27:47.172007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:32:55.571 [2024-12-09 05:27:47.172645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:32:55.571 [2024-12-09 05:27:47.172676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:32:55.829 [2024-12-09 05:27:47.225509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:32:55.829 [2024-12-09 05:27:47.225783] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:32:55.829 [2024-12-09 05:27:47.226203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
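nvmfappstart then launches the target inside that namespace with --interrupt-mode (appended by build_nvmf_app_args because the interrupt-mode test flag is set) and waits for its RPC socket before the reactors report as started above. A rough manual equivalent of that start-and-wait step is sketched below, assuming the repository path shown in the trace; the rpc_get_methods polling loop is only a stand-in for the test's waitforlisten helper, whose internals are not shown in this log.

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the test namespace in interrupt mode and wait for its RPC socket.
# Paths below match the trace (/home/vagrant/spdk_repo/spdk); adjust for your checkout.
SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Poll until the app answers on its RPC socket (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is up"
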
01:32:56.393 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:32:56.393 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 [2024-12-09 05:27:47.877407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 [2024-12-09 05:27:47.897613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 NULL1 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 Delay0 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100323 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:32:56.394 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:32:56.651 [2024-12-09 05:27:48.080583] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
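The RPC sequence just traced (TCP transport, subsystem cnode1 capped at 10 namespaces, listener on 10.0.0.3:4420, a null bdev wrapped in a delay bdev, then the namespace attach) can be replayed by hand with rpc.py. Below is a minimal sketch using the same arguments as the rpc_cmd calls above; the delay bdev's 1000000-microsecond latencies add roughly one second per operation, which is what keeps I/O queued long enough for the subsystem delete that follows to catch commands in flight and produce the error completions seen next.

#!/usr/bin/env bash
# Sketch: rebuild the subsystem exactly as the rpc_cmd calls in the trace do.
SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev so every
# operation takes on the order of a second to complete.
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Kick off background I/O against the subsystem, as the test does with spdk_nvme_perf,
# then give it a couple of seconds to queue commands before anything is deleted.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
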
01:32:58.548 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:32:58.548 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:32:58.548 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 [2024-12-09 05:27:50.104525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137c7e0 is same with the state(6) to be set 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read 
completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 [2024-12-09 05:27:50.105260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137bc30 is same with the state(6) to be set 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write 
completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.548 starting I/O failed: -6 01:32:58.548 [2024-12-09 05:27:50.108302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2614000c40 is same with the state(6) to be set 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Write completed with error (sct=0, sc=8) 01:32:58.548 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed 
with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Write completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:58.549 Read completed with error (sct=0, sc=8) 01:32:59.482 [2024-12-09 05:27:51.093195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1370aa0 is same with the state(6) to be set 01:32:59.765 Write completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Write completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Write completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.765 Write completed with error (sct=0, sc=8) 01:32:59.765 Read completed with error (sct=0, sc=8) 01:32:59.766 [2024-12-09 05:27:51.105395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137ba50 is same with the state(6) to be set 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 
01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 [2024-12-09 05:27:51.105787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137eea0 is same with the state(6) to be set 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 [2024-12-09 05:27:51.107523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f261400d020 is same with the state(6) to be set 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Write completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 Read completed with error (sct=0, sc=8) 01:32:59.766 [2024-12-09 05:27:51.108076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f261400d680 is same with the state(6) to be set 01:32:59.766 Initializing NVMe Controllers 01:32:59.766 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:32:59.766 Controller IO queue size 128, less than required. 
01:32:59.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:32:59.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:32:59.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:32:59.766 Initialization complete. Launching workers. 01:32:59.766 ======================================================== 01:32:59.766 Latency(us) 01:32:59.766 Device Information : IOPS MiB/s Average min max 01:32:59.766 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.86 0.08 894573.42 655.66 1006227.56 01:32:59.766 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.39 0.08 917178.99 785.17 1009340.46 01:32:59.766 ======================================================== 01:32:59.766 Total : 330.25 0.16 905552.29 655.66 1009340.46 01:32:59.766 01:32:59.766 [2024-12-09 05:27:51.108824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1370aa0 (9): Bad file descriptor 01:32:59.766 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:32:59.766 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 01:32:59.766 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100323 01:32:59.766 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 01:32:59.766 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100323 01:33:00.045 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100323) - No such process 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100323 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 100323 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100323 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:33:00.045 05:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:33:00.045 [2024-12-09 05:27:51.629723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100372 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:00.045 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:33:00.303 [2024-12-09 05:27:51.794218] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
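Both halves of the test end with the same bounded polling loop that the delay/kill -0/sleep lines trace out here: the background spdk_nvme_perf process is probed with kill -0 every half second until it exits, in the first half because its outstanding I/O was failed by the nvmf_delete_subsystem call, and in this second half because its -t 3 runtime simply elapses. A minimal sketch of that loop follows, assuming perf_pid holds the background perf PID as in the earlier sketch; the retry cap mirrors the (( delay++ > 20 )) guard in the trace.

#!/usr/bin/env bash
# Sketch: bounded wait for a background spdk_nvme_perf process to exit.
# Assumes perf_pid was captured when perf was launched in the background.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf ($perf_pid) still running after ~10s" >&2
        exit 1
    fi
    sleep 0.5
done
# Reap the child; ignore its exit status, which is non-zero when its I/O was aborted.
wait "$perf_pid" 2>/dev/null || true
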
01:33:00.561 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:00.561 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:00.561 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:01.128 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:01.128 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:01.128 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:01.695 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:01.695 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:01.695 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:02.262 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:02.262 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:02.262 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:02.828 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:02.828 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:02.828 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:03.086 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:03.086 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:03.086 05:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:33:03.344 Initializing NVMe Controllers 01:33:03.344 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:33:03.344 Controller IO queue size 128, less than required. 01:33:03.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:33:03.344 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:33:03.344 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:33:03.344 Initialization complete. Launching workers. 
01:33:03.344 ======================================================== 01:33:03.344 Latency(us) 01:33:03.344 Device Information : IOPS MiB/s Average min max 01:33:03.344 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002141.40 1000085.06 1007272.44 01:33:03.344 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003981.28 1000126.86 1040571.92 01:33:03.344 ======================================================== 01:33:03.344 Total : 256.00 0.12 1003061.34 1000085.06 1040571.92 01:33:03.344 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100372 01:33:03.605 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100372) - No such process 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100372 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 01:33:03.605 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:33:03.605 rmmod nvme_tcp 01:33:03.873 rmmod nvme_fabrics 01:33:03.873 rmmod nvme_keyring 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100272 ']' 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100272 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100272 ']' 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100272 01:33:03.873 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 100272 01:33:03.874 killing process with pid 100272 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100272' 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100272 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100272 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:33:03.874 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 01:33:04.132 01:33:04.132 real 0m9.163s 01:33:04.132 user 0m24.249s 01:33:04.132 sys 0m1.920s 01:33:04.132 ************************************ 01:33:04.132 END TEST nvmf_delete_subsystem 01:33:04.132 ************************************ 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:33:04.132 ************************************ 01:33:04.132 START TEST nvmf_host_management 01:33:04.132 ************************************ 01:33:04.132 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:33:04.392 * Looking for test storage... 
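The xtrace that follows steps through the lcov version check in scripts/common.sh: the installed lcov version is compared against 2 with the lt/cmp_versions helpers, which split both version strings on '.' and '-' and compare them component by component, and the outcome decides which --rc coverage option spellings get exported. A compact, self-contained sketch of that comparison, inferred from the trace (the real helpers support more operators and edge cases):

# version_lt A B  ->  exit 0 iff A < B, comparing numeric components split on '.' and '-'
version_lt() {
    local IFS='.-'
    local -a v1 v2
    read -ra v1 <<<"$1"
    read -ra v2 <<<"$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov is pre-2.0, keep the old --rc option names"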
01:33:04.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:04.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:04.392 --rc genhtml_branch_coverage=1 01:33:04.392 --rc genhtml_function_coverage=1 01:33:04.392 --rc genhtml_legend=1 01:33:04.392 --rc geninfo_all_blocks=1 01:33:04.392 --rc geninfo_unexecuted_blocks=1 01:33:04.392 01:33:04.392 ' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:04.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:04.392 --rc genhtml_branch_coverage=1 01:33:04.392 --rc genhtml_function_coverage=1 01:33:04.392 --rc genhtml_legend=1 01:33:04.392 --rc geninfo_all_blocks=1 01:33:04.392 --rc geninfo_unexecuted_blocks=1 01:33:04.392 01:33:04.392 ' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:04.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:04.392 --rc genhtml_branch_coverage=1 01:33:04.392 --rc genhtml_function_coverage=1 01:33:04.392 --rc genhtml_legend=1 01:33:04.392 --rc geninfo_all_blocks=1 01:33:04.392 --rc geninfo_unexecuted_blocks=1 01:33:04.392 01:33:04.392 ' 01:33:04.392 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:04.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:04.392 --rc genhtml_branch_coverage=1 01:33:04.392 --rc genhtml_function_coverage=1 01:33:04.392 --rc genhtml_legend=1 
01:33:04.392 --rc geninfo_all_blocks=1 01:33:04.392 --rc geninfo_unexecuted_blocks=1 01:33:04.392 01:33:04.392 ' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:33:04.393 05:27:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:33:04.393 05:27:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:33:04.393 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:33:04.393 Cannot find device "nvmf_init_br" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:33:04.394 Cannot find device "nvmf_init_br2" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:33:04.394 Cannot find device "nvmf_tgt_br" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:33:04.394 Cannot find device "nvmf_tgt_br2" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:33:04.394 Cannot find device "nvmf_init_br" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 01:33:04.394 Cannot find device "nvmf_init_br2" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:33:04.394 Cannot find device "nvmf_tgt_br" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:33:04.394 Cannot find device "nvmf_tgt_br2" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:33:04.394 Cannot find device "nvmf_br" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:33:04.394 Cannot find device "nvmf_init_if" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:33:04.394 Cannot find device "nvmf_init_if2" 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:04.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:04.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:33:04.394 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:33:04.394 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:33:04.652 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:33:04.652 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 01:33:04.652 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:33:04.653 05:27:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:33:04.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:33:04.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 01:33:04.653 01:33:04.653 --- 10.0.0.3 ping statistics --- 01:33:04.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:04.653 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:33:04.653 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:33:04.653 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 01:33:04.653 01:33:04.653 --- 10.0.0.4 ping statistics --- 01:33:04.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:04.653 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:33:04.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:33:04.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:33:04.653 01:33:04.653 --- 10.0.0.1 ping statistics --- 01:33:04.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:04.653 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:33:04.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:33:04.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:33:04.653 01:33:04.653 --- 10.0.0.2 ping statistics --- 01:33:04.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:04.653 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=100659 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 100659 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100659 ']' 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:04.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:04.653 05:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:04.653 [2024-12-09 05:27:56.198756] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:33:04.653 [2024-12-09 05:27:56.199519] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:04.653 [2024-12-09 05:27:56.199573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:33:04.911 [2024-12-09 05:27:56.341771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:33:04.911 [2024-12-09 05:27:56.378157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:33:04.911 [2024-12-09 05:27:56.378196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:33:04.911 [2024-12-09 05:27:56.378203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:33:04.911 [2024-12-09 05:27:56.378208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:33:04.911 [2024-12-09 05:27:56.378212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:33:04.911 [2024-12-09 05:27:56.379013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:33:04.911 [2024-12-09 05:27:56.379304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:33:04.911 [2024-12-09 05:27:56.379648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:33:04.911 [2024-12-09 05:27:56.379811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:33:04.911 [2024-12-09 05:27:56.435304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:33:04.911 [2024-12-09 05:27:56.435583] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:33:04.911 [2024-12-09 05:27:56.436157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:33:04.911 [2024-12-09 05:27:56.436223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:33:04.911 [2024-12-09 05:27:56.437501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
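By this point nvmfappstart has launched the target inside the nvmf_tgt_ns_spdk namespace built above, and its reactors are up on cores 1-4 with the SPDK threads in interrupt mode. A rough sketch of that start-and-wait step, using the exact command line from the trace; the polling stand-in for waitforlisten is an assumption (the real helper retries the RPC socket with a bounded number of attempts):

# Start nvmf_tgt in the test namespace, interrupt mode, core mask 0x1E, as in the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

# Crude waitforlisten stand-in: poll the RPC socket until the app answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.2
done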
01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:05.477 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.477 [2024-12-09 05:27:57.077226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.735 Malloc0 01:33:05.735 [2024-12-09 05:27:57.149176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100727 01:33:05.735 05:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100727 /var/tmp/bdevperf.sock 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100727 ']' 01:33:05.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:33:05.735 { 01:33:05.735 "params": { 01:33:05.735 "name": "Nvme$subsystem", 01:33:05.735 "trtype": "$TEST_TRANSPORT", 01:33:05.735 "traddr": "$NVMF_FIRST_TARGET_IP", 01:33:05.735 "adrfam": "ipv4", 01:33:05.735 "trsvcid": "$NVMF_PORT", 01:33:05.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:33:05.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:33:05.735 "hdgst": ${hdgst:-false}, 01:33:05.735 "ddgst": ${ddgst:-false} 01:33:05.735 }, 01:33:05.735 "method": "bdev_nvme_attach_controller" 01:33:05.735 } 01:33:05.735 EOF 01:33:05.735 )") 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
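The trace above shows gen_nvmf_target_json expanding a heredoc template into one bdev_nvme_attach_controller stanza and piping it through jq; the rendered JSON that bdevperf receives via --json /dev/fd/63 is printed next in the log. A stripped-down sketch of that idiom with the values from the trace (the real helper loops over the subsystem numbers it is given and joins the fragments with commas):

subsystem=0
config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
jq . <<<"${config[0]}"   # pretty-print; bdevperf consumes it through --json <(...) process substitution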
01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:33:05.735 05:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:33:05.735 "params": { 01:33:05.735 "name": "Nvme0", 01:33:05.735 "trtype": "tcp", 01:33:05.735 "traddr": "10.0.0.3", 01:33:05.735 "adrfam": "ipv4", 01:33:05.735 "trsvcid": "4420", 01:33:05.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:05.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:33:05.735 "hdgst": false, 01:33:05.735 "ddgst": false 01:33:05.735 }, 01:33:05.735 "method": "bdev_nvme_attach_controller" 01:33:05.736 }' 01:33:05.736 [2024-12-09 05:27:57.222594] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:05.736 [2024-12-09 05:27:57.222646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100727 ] 01:33:05.993 [2024-12-09 05:27:57.364731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:05.993 [2024-12-09 05:27:57.399186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:05.993 Running I/O for 10 seconds... 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1194 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1194 -ge 100 ']' 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:06.561 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:06.561 [2024-12-09 05:27:58.113017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 
01:33:06.561 [2024-12-09 05:27:58.113095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.561 [2024-12-09 05:27:58.113223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113260] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24879f0 is same with the state(6) to be set 01:33:06.562 [2024-12-09 05:27:58.113404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:33:06.562 [2024-12-09 05:27:58.113514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 
05:27:58.113655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113785] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.562 [2024-12-09 05:27:58.113836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.562 [2024-12-09 05:27:58.113841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.113991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.113997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:33:06.563 [2024-12-09 05:27:58.114270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.114277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ba1d0 is same with the state(6) to be set 01:33:06.563 [2024-12-09 05:27:58.115450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:33:06.563 task offset: 24576 on job bdev=Nvme0n1 fails 01:33:06.563 01:33:06.563 Latency(us) 01:33:06.563 [2024-12-09T05:27:58.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:06.563 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:33:06.563 Job: Nvme0n1 ended in about 0.58 seconds with error 01:33:06.563 Verification LBA range: start 0x0 length 0x400 01:33:06.563 Nvme0n1 : 0.58 2096.27 131.02 110.33 
0.00 28275.85 3478.45 26617.70 01:33:06.563 [2024-12-09T05:27:58.180Z] =================================================================================================================== 01:33:06.563 [2024-12-09T05:27:58.180Z] Total : 2096.27 131.02 110.33 0.00 28275.85 3478.45 26617.70 01:33:06.563 [2024-12-09 05:27:58.117308] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:33:06.563 [2024-12-09 05:27:58.117327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ba660 (9): Bad file descriptor 01:33:06.563 [2024-12-09 05:27:58.118039] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 01:33:06.563 [2024-12-09 05:27:58.118094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:33:06.563 [2024-12-09 05:27:58.118109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:33:06.563 [2024-12-09 05:27:58.118118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 01:33:06.564 [2024-12-09 05:27:58.118123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 01:33:06.564 [2024-12-09 05:27:58.118129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 01:33:06.564 [2024-12-09 05:27:58.118134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21ba660 01:33:06.564 [2024-12-09 05:27:58.118155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ba660 (9): Bad file descriptor 01:33:06.564 [2024-12-09 05:27:58.118165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:33:06.564 [2024-12-09 05:27:58.118171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:33:06.564 [2024-12-09 05:27:58.118178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:33:06.564 [2024-12-09 05:27:58.118184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:33:06.564 05:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100727 01:33:07.937 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100727) - No such process 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:33:07.937 { 01:33:07.937 "params": { 01:33:07.937 "name": "Nvme$subsystem", 01:33:07.937 "trtype": "$TEST_TRANSPORT", 01:33:07.937 "traddr": "$NVMF_FIRST_TARGET_IP", 01:33:07.937 "adrfam": "ipv4", 01:33:07.937 "trsvcid": "$NVMF_PORT", 01:33:07.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:33:07.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:33:07.937 "hdgst": ${hdgst:-false}, 01:33:07.937 "ddgst": ${ddgst:-false} 01:33:07.937 }, 01:33:07.937 "method": "bdev_nvme_attach_controller" 01:33:07.937 } 01:33:07.937 EOF 01:33:07.937 )") 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:33:07.937 05:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:33:07.937 "params": { 01:33:07.937 "name": "Nvme0", 01:33:07.937 "trtype": "tcp", 01:33:07.937 "traddr": "10.0.0.3", 01:33:07.937 "adrfam": "ipv4", 01:33:07.937 "trsvcid": "4420", 01:33:07.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:07.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:33:07.937 "hdgst": false, 01:33:07.937 "ddgst": false 01:33:07.937 }, 01:33:07.937 "method": "bdev_nvme_attach_controller" 01:33:07.937 }' 01:33:07.937 [2024-12-09 05:27:59.170882] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:07.937 [2024-12-09 05:27:59.170941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100777 ] 01:33:07.937 [2024-12-09 05:27:59.310737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:07.937 [2024-12-09 05:27:59.344385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:07.937 Running I/O for 1 seconds... 01:33:09.136 2176.00 IOPS, 136.00 MiB/s 01:33:09.136 Latency(us) 01:33:09.136 [2024-12-09T05:28:00.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:09.136 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:33:09.136 Verification LBA range: start 0x0 length 0x400 01:33:09.136 Nvme0n1 : 1.01 2209.55 138.10 0.00 0.00 28418.97 3629.69 27827.59 01:33:09.136 [2024-12-09T05:28:00.753Z] =================================================================================================================== 01:33:09.136 [2024-12-09T05:28:00.753Z] Total : 2209.55 138.10 0.00 0.00 28418.97 3629.69 27827.59 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 01:33:09.136 rmmod nvme_tcp 01:33:09.136 rmmod nvme_fabrics 01:33:09.136 rmmod nvme_keyring 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 100659 ']' 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 100659 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 100659 ']' 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 100659 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:09.136 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100659 01:33:09.394 killing process with pid 100659 01:33:09.394 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:33:09.394 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:33:09.394 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100659' 01:33:09.394 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 100659 01:33:09.394 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 100659 01:33:09.394 [2024-12-09 05:28:00.886631] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:33:09.395 05:28:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:33:09.395 05:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:33:09.652 01:33:09.652 real 0m5.445s 01:33:09.652 user 0m16.601s 01:33:09.652 sys 0m1.822s 01:33:09.652 ************************************ 01:33:09.652 END TEST nvmf_host_management 01:33:09.652 ************************************ 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:09.652 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:33:09.652 ************************************ 01:33:09.652 START TEST nvmf_lvol 01:33:09.652 ************************************ 01:33:09.653 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:33:09.653 * Looking for test storage... 01:33:09.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:33:09.653 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:09.653 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 01:33:09.653 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:09.911 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:09.911 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:09.912 --rc genhtml_branch_coverage=1 01:33:09.912 --rc genhtml_function_coverage=1 01:33:09.912 --rc genhtml_legend=1 01:33:09.912 --rc geninfo_all_blocks=1 01:33:09.912 --rc geninfo_unexecuted_blocks=1 01:33:09.912 01:33:09.912 ' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:09.912 --rc genhtml_branch_coverage=1 01:33:09.912 --rc genhtml_function_coverage=1 01:33:09.912 --rc genhtml_legend=1 01:33:09.912 --rc geninfo_all_blocks=1 01:33:09.912 --rc geninfo_unexecuted_blocks=1 01:33:09.912 01:33:09.912 ' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:09.912 --rc genhtml_branch_coverage=1 01:33:09.912 --rc genhtml_function_coverage=1 01:33:09.912 --rc genhtml_legend=1 01:33:09.912 --rc geninfo_all_blocks=1 01:33:09.912 --rc geninfo_unexecuted_blocks=1 01:33:09.912 01:33:09.912 ' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:09.912 --rc genhtml_branch_coverage=1 01:33:09.912 --rc genhtml_function_coverage=1 01:33:09.912 --rc genhtml_legend=1 01:33:09.912 --rc geninfo_all_blocks=1 01:33:09.912 --rc geninfo_unexecuted_blocks=1 01:33:09.912 01:33:09.912 ' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:33:09.912 05:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:33:09.912 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:33:09.913 Cannot find device "nvmf_init_br" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:33:09.913 Cannot find device "nvmf_init_br2" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:33:09.913 Cannot find device "nvmf_tgt_br" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:33:09.913 Cannot find device "nvmf_tgt_br2" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:33:09.913 Cannot find device "nvmf_init_br" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:33:09.913 Cannot find device "nvmf_init_br2" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:33:09.913 Cannot find 
device "nvmf_tgt_br" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:33:09.913 Cannot find device "nvmf_tgt_br2" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:33:09.913 Cannot find device "nvmf_br" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:33:09.913 Cannot find device "nvmf_init_if" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:33:09.913 Cannot find device "nvmf_init_if2" 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:09.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:09.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:33:09.913 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:33:10.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:33:10.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 01:33:10.172 01:33:10.172 --- 10.0.0.3 ping statistics --- 01:33:10.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:10.172 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:33:10.172 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:33:10.172 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:33:10.172 01:33:10.172 --- 10.0.0.4 ping statistics --- 01:33:10.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:10.172 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:33:10.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:33:10.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 01:33:10.172 01:33:10.172 --- 10.0.0.1 ping statistics --- 01:33:10.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:10.172 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:33:10.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:33:10.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 01:33:10.172 01:33:10.172 --- 10.0.0.2 ping statistics --- 01:33:10.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:10.172 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101031 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101031 01:33:10.172 05:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101031 ']' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:10.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:10.172 05:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:33:10.172 [2024-12-09 05:28:01.678189] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:33:10.172 [2024-12-09 05:28:01.678935] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:10.172 [2024-12-09 05:28:01.678977] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:33:10.430 [2024-12-09 05:28:01.818627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:33:10.430 [2024-12-09 05:28:01.854072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:33:10.430 [2024-12-09 05:28:01.854107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:33:10.430 [2024-12-09 05:28:01.854114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:33:10.430 [2024-12-09 05:28:01.854119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:33:10.430 [2024-12-09 05:28:01.854123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:33:10.430 [2024-12-09 05:28:01.854774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:33:10.430 [2024-12-09 05:28:01.854992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:33:10.430 [2024-12-09 05:28:01.855130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:10.430 [2024-12-09 05:28:01.908941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:33:10.430 [2024-12-09 05:28:01.909734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:33:10.430 [2024-12-09 05:28:01.910890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:33:10.430 [2024-12-09 05:28:01.910891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
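
The trace above shows nvmf_veth_init wiring up the test network (two veth pairs joined through the nvmf_br bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace) and nvmfappstart then launching nvmf_tgt inside that namespace in interrupt mode with core mask 0x7. A reduced stand-alone sketch of the same topology, based only on the iproute2/iptables commands visible in the log, might look like the following; it assumes root privileges and unused interface/namespace names, collapses the per-interface bring-up into a loop, and omits the second initiator/target pair that the real helper also configures:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                  # bridge joining the two pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.3                                               # initiator -> target reachability check

With that topology in place, the target is started inside the namespace exactly as in the log:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
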
01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:33:10.996 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:33:11.254 [2024-12-09 05:28:02.779805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:33:11.254 05:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:33:11.512 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:33:11.512 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:33:11.769 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:33:11.769 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:33:12.027 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:33:12.285 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=644ca88a-e4e3-498b-90f3-9a82052b98e1 01:33:12.285 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 644ca88a-e4e3-498b-90f3-9a82052b98e1 lvol 20 01:33:12.285 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2053d2af-1cbc-4c6d-ad6b-59e4b59d595b 01:33:12.285 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:33:12.542 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2053d2af-1cbc-4c6d-ad6b-59e4b59d595b 01:33:12.800 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:33:13.057 [2024-12-09 05:28:04.447684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:33:13.057 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:33:13.057 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:33:13.057 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101173 01:33:13.057 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:33:14.480 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2053d2af-1cbc-4c6d-ad6b-59e4b59d595b MY_SNAPSHOT 01:33:14.480 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1ec014ff-f935-4cf3-b83d-6e8fdc1d363b 01:33:14.480 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2053d2af-1cbc-4c6d-ad6b-59e4b59d595b 30 01:33:14.736 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1ec014ff-f935-4cf3-b83d-6e8fdc1d363b MY_CLONE 01:33:14.736 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c2f90567-819c-4a19-b285-f61023f77610 01:33:14.736 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c2f90567-819c-4a19-b285-f61023f77610 01:33:15.300 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101173 01:33:23.405 Initializing NVMe Controllers 01:33:23.405 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:33:23.405 Controller IO queue size 128, less than required. 01:33:23.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:33:23.405 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:33:23.405 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:33:23.405 Initialization complete. Launching workers. 
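
At this point the lvol test has built its stack (two 64 MiB / 512 B-block malloc bdevs striped into raid0, an lvstore named lvs on top, and a 20 MiB lvol exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420) and, while spdk_nvme_perf drives 10 seconds of 4 KiB random writes at queue depth 128 on cores 3 and 4 (mask 0x18), it exercises snapshot, resize, clone and inflate. A condensed sketch of that RPC sequence, using the rpc.py calls that appear in the trace (rpc.py abbreviates the scripts/rpc.py path used in the log, and the angle-bracket placeholders stand for the UUIDs the RPCs return):

  rpc.py bdev_malloc_create 64 512                                    # Malloc0
  rpc.py bdev_malloc_create 64 512                                    # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                           # prints the lvstore UUID
  rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20                   # 20 MiB logical volume
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                   # taken while perf I/O is in flight
  rpc.py bdev_lvol_resize <lvol-uuid> 30                              # grow the lvol from 20 to 30 MiB
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>                               # detach the clone from its snapshot

The latency figures that follow are what the concurrent spdk_nvme_perf run reported.
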
01:33:23.405 ========================================================
01:33:23.405 Latency(us)
01:33:23.405 Device Information : IOPS MiB/s Average min max
01:33:23.405 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16031.20 62.62 7986.79 4179.34 44763.65
01:33:23.405 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15916.90 62.18 8043.20 4306.61 58303.29
01:33:23.405 ========================================================
01:33:23.405 Total : 31948.09 124.80 8014.89 4179.34 58303.29
01:33:23.405
01:33:23.405 05:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
01:33:23.662 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2053d2af-1cbc-4c6d-ad6b-59e4b59d595b
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 644ca88a-e4e3-498b-90f3-9a82052b98e1
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
01:33:23.919 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:33:24.177 rmmod nvme_tcp
01:33:24.177 rmmod nvme_fabrics
01:33:24.177 rmmod nvme_keyring
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101031 ']'
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101031
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101031 ']'
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101031
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101031
01:33:24.177 05:28:15
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:33:24.177 killing process with pid 101031 01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101031' 01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101031 01:33:24.177 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101031 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:24.435 05:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:24.435 
05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:33:24.435 01:33:24.435 real 0m14.843s 01:33:24.435 user 0m53.887s 01:33:24.435 sys 0m4.947s 01:33:24.435 ************************************ 01:33:24.435 END TEST nvmf_lvol 01:33:24.435 ************************************ 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:24.435 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:33:24.694 ************************************ 01:33:24.694 START TEST nvmf_lvs_grow 01:33:24.694 ************************************ 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:33:24.694 * Looking for test storage... 
01:33:24.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:33:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:24.694 --rc genhtml_branch_coverage=1 01:33:24.694 --rc genhtml_function_coverage=1 01:33:24.694 --rc genhtml_legend=1 01:33:24.694 --rc geninfo_all_blocks=1 01:33:24.694 --rc geninfo_unexecuted_blocks=1 01:33:24.694 01:33:24.694 ' 01:33:24.694 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:33:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:24.694 --rc genhtml_branch_coverage=1 01:33:24.694 --rc genhtml_function_coverage=1 01:33:24.694 --rc genhtml_legend=1 01:33:24.694 --rc geninfo_all_blocks=1 01:33:24.695 --rc geninfo_unexecuted_blocks=1 01:33:24.695 01:33:24.695 ' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:33:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:24.695 --rc genhtml_branch_coverage=1 01:33:24.695 --rc genhtml_function_coverage=1 01:33:24.695 --rc genhtml_legend=1 01:33:24.695 --rc geninfo_all_blocks=1 01:33:24.695 --rc geninfo_unexecuted_blocks=1 01:33:24.695 01:33:24.695 ' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:33:24.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:33:24.695 --rc genhtml_branch_coverage=1 01:33:24.695 --rc genhtml_function_coverage=1 01:33:24.695 --rc genhtml_legend=1 01:33:24.695 --rc geninfo_all_blocks=1 01:33:24.695 --rc geninfo_unexecuted_blocks=1 01:33:24.695 01:33:24.695 ' 01:33:24.695 05:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:33:24.695 05:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:33:24.695 Cannot find device "nvmf_init_br" 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:33:24.695 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:33:24.696 Cannot find device "nvmf_init_br2" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:33:24.696 Cannot find device "nvmf_tgt_br" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:33:24.696 Cannot find device "nvmf_tgt_br2" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:33:24.696 Cannot find device "nvmf_init_br" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:33:24.696 Cannot find device "nvmf_init_br2" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:33:24.696 Cannot find device "nvmf_tgt_br" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:33:24.696 Cannot find device "nvmf_tgt_br2" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:33:24.696 Cannot find device "nvmf_br" 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:33:24.696 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:33:24.954 Cannot find device "nvmf_init_if" 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:33:24.954 Cannot find device "nvmf_init_if2" 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:33:24.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:33:24.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 01:33:24.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:33:24.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 01:33:24.954 01:33:24.954 --- 10.0.0.3 ping statistics --- 01:33:24.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:24.954 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:33:24.954 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:33:24.954 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.024 ms 01:33:24.954 01:33:24.954 --- 10.0.0.4 ping statistics --- 01:33:24.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:24.954 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:33:24.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:33:24.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 01:33:24.954 01:33:24.954 --- 10.0.0.1 ping statistics --- 01:33:24.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:24.954 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:33:24.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:33:24.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 01:33:24.954 01:33:24.954 --- 10.0.0.2 ping statistics --- 01:33:24.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:33:24.954 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:33:24.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
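For orientation, the nvmf_veth_init trace above reduces to a small, reproducible topology: a network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, static 10.0.0.x/24 addresses, iptables ACCEPT rules for the NVMe/TCP port, and ping checks. The "Cannot find device" lines earlier are the idempotent cleanup pass failing harmlessly because the interfaces do not exist yet (each failing command is followed by a traced "true"). A minimal standalone sketch of the same setup, condensed to one initiator/target pair and using the names and addresses from the trace (the real fixture creates a second pair for 10.0.0.2/10.0.0.4 and tags its iptables rules with a comment), would be:

    # run as root; names and addresses match the trace above
    ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # bridge joins the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.3                                            # host -> namespaced target reachability check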
01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101589 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101589 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101589 ']' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:33:24.954 05:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:33:24.954 [2024-12-09 05:28:16.563449] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:33:24.954 [2024-12-09 05:28:16.564203] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:24.954 [2024-12-09 05:28:16.564363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:33:25.213 [2024-12-09 05:28:16.696057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:25.213 [2024-12-09 05:28:16.729364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:33:25.213 [2024-12-09 05:28:16.729401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:33:25.213 [2024-12-09 05:28:16.729407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:33:25.213 [2024-12-09 05:28:16.729412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:33:25.213 [2024-12-09 05:28:16.729417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:33:25.213 [2024-12-09 05:28:16.729656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:25.213 [2024-12-09 05:28:16.781286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:33:25.213 [2024-12-09 05:28:16.781533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
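nvmfappstart then launches the target inside that namespace. Because the check at nvmf/common.sh@33 in this trace appends --interrupt-mode to NVMF_APP, the reactor and the spdk_threads report being set to intr mode in the notices above. Stripped of the harness plumbing, the startup amounts to the following (paths and flags copied from the trace; the real script uses waitforlisten rather than this simple socket poll, and /var/tmp is shared across network namespaces, so rpc.py can talk to the target from the host side):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # wait until the app is up and serving JSON-RPC on the default socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # the test then creates the TCP transport, as traced in the nvmf_create_transport call below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192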
01:33:25.779 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:25.779 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:33:25.779 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:33:25.779 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:25.779 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:33:26.039 [2024-12-09 05:28:17.602390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:33:26.039 ************************************ 01:33:26.039 START TEST lvs_grow_clean 01:33:26.039 ************************************ 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:26.039 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:33:26.297 05:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:33:26.297 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:33:26.554 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:26.554 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:26.554 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:33:26.812 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:33:26.812 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:33:26.812 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 lvol 150 01:33:27.081 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2690c202-3516-4cf8-919e-79c21f824bd3 01:33:27.081 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:27.081 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:33:27.081 [2024-12-09 05:28:18.649972] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:33:27.081 [2024-12-09 05:28:18.650039] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:33:27.081 true 01:33:27.081 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:33:27.081 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:27.355 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:33:27.355 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:33:27.614 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2690c202-3516-4cf8-919e-79c21f824bd3 01:33:27.873 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:33:27.873 [2024-12-09 05:28:19.430418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:33:27.873 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101740 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101740 /var/tmp/bdevperf.sock 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 101740 ']' 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:33:28.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:33:28.131 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:28.132 05:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:33:28.132 [2024-12-09 05:28:19.677686] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:33:28.132 [2024-12-09 05:28:19.677744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101740 ] 01:33:28.390 [2024-12-09 05:28:19.817640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:28.390 [2024-12-09 05:28:19.852215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:33:28.956 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:28.956 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:33:28.956 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:33:29.214 Nvme0n1 01:33:29.214 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:33:29.473 [ 01:33:29.473 { 01:33:29.473 "aliases": [ 01:33:29.473 "2690c202-3516-4cf8-919e-79c21f824bd3" 01:33:29.473 ], 01:33:29.473 "assigned_rate_limits": { 01:33:29.473 "r_mbytes_per_sec": 0, 01:33:29.473 "rw_ios_per_sec": 0, 01:33:29.473 "rw_mbytes_per_sec": 0, 01:33:29.473 "w_mbytes_per_sec": 0 01:33:29.473 }, 01:33:29.473 "block_size": 4096, 01:33:29.473 "claimed": false, 01:33:29.473 "driver_specific": { 01:33:29.473 "mp_policy": "active_passive", 01:33:29.473 "nvme": [ 01:33:29.473 { 01:33:29.473 "ctrlr_data": { 01:33:29.473 "ana_reporting": false, 01:33:29.473 "cntlid": 1, 01:33:29.473 "firmware_revision": "25.01", 01:33:29.473 "model_number": "SPDK bdev Controller", 01:33:29.473 "multi_ctrlr": true, 01:33:29.473 "oacs": { 01:33:29.473 "firmware": 0, 01:33:29.473 "format": 0, 01:33:29.473 "ns_manage": 0, 01:33:29.473 "security": 0 01:33:29.473 }, 01:33:29.473 "serial_number": "SPDK0", 01:33:29.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:29.473 "vendor_id": "0x8086" 01:33:29.473 }, 01:33:29.473 "ns_data": { 01:33:29.473 "can_share": true, 01:33:29.473 "id": 1 01:33:29.473 }, 01:33:29.473 "trid": { 01:33:29.473 "adrfam": "IPv4", 01:33:29.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:29.473 "traddr": "10.0.0.3", 01:33:29.473 "trsvcid": "4420", 01:33:29.473 "trtype": "TCP" 01:33:29.473 }, 01:33:29.473 "vs": { 01:33:29.473 "nvme_version": "1.3" 01:33:29.473 } 01:33:29.473 } 01:33:29.473 ] 01:33:29.473 }, 01:33:29.473 "memory_domains": [ 01:33:29.473 { 01:33:29.473 "dma_device_id": "system", 01:33:29.473 "dma_device_type": 1 01:33:29.473 } 01:33:29.473 ], 01:33:29.473 "name": "Nvme0n1", 01:33:29.473 "num_blocks": 38912, 01:33:29.473 "numa_id": -1, 01:33:29.473 "product_name": "NVMe disk", 01:33:29.473 "supported_io_types": { 01:33:29.473 "abort": true, 01:33:29.473 "compare": true, 01:33:29.473 "compare_and_write": true, 01:33:29.473 "copy": true, 01:33:29.473 "flush": true, 01:33:29.473 "get_zone_info": false, 01:33:29.473 "nvme_admin": true, 01:33:29.473 "nvme_io": true, 01:33:29.473 "nvme_io_md": false, 01:33:29.473 "nvme_iov_md": false, 01:33:29.473 "read": true, 01:33:29.473 "reset": true, 01:33:29.473 "seek_data": false, 01:33:29.473 
"seek_hole": false, 01:33:29.473 "unmap": true, 01:33:29.473 "write": true, 01:33:29.473 "write_zeroes": true, 01:33:29.473 "zcopy": false, 01:33:29.473 "zone_append": false, 01:33:29.473 "zone_management": false 01:33:29.473 }, 01:33:29.473 "uuid": "2690c202-3516-4cf8-919e-79c21f824bd3", 01:33:29.473 "zoned": false 01:33:29.473 } 01:33:29.473 ] 01:33:29.473 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101788 01:33:29.473 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:33:29.473 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:33:29.473 Running I/O for 10 seconds... 01:33:30.850 Latency(us) 01:33:30.850 [2024-12-09T05:28:22.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:30.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:30.850 Nvme0n1 : 1.00 10045.00 39.24 0.00 0.00 0.00 0.00 0.00 01:33:30.850 [2024-12-09T05:28:22.467Z] =================================================================================================================== 01:33:30.850 [2024-12-09T05:28:22.467Z] Total : 10045.00 39.24 0.00 0.00 0.00 0.00 0.00 01:33:30.850 01:33:31.417 05:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:31.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:31.675 Nvme0n1 : 2.00 11415.00 44.59 0.00 0.00 0.00 0.00 0.00 01:33:31.675 [2024-12-09T05:28:23.292Z] =================================================================================================================== 01:33:31.675 [2024-12-09T05:28:23.292Z] Total : 11415.00 44.59 0.00 0.00 0.00 0.00 0.00 01:33:31.675 01:33:31.675 true 01:33:31.675 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:31.675 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:33:31.933 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:33:31.933 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:33:31.934 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101788 01:33:32.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:32.502 Nvme0n1 : 3.00 12043.67 47.05 0.00 0.00 0.00 0.00 0.00 01:33:32.502 [2024-12-09T05:28:24.119Z] =================================================================================================================== 01:33:32.502 [2024-12-09T05:28:24.119Z] Total : 12043.67 47.05 0.00 0.00 0.00 0.00 0.00 01:33:32.502 01:33:33.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:33.436 Nvme0n1 : 4.00 12284.25 47.99 0.00 0.00 0.00 0.00 0.00 01:33:33.436 
[2024-12-09T05:28:25.053Z] =================================================================================================================== 01:33:33.436 [2024-12-09T05:28:25.053Z] Total : 12284.25 47.99 0.00 0.00 0.00 0.00 0.00 01:33:33.436 01:33:34.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:34.809 Nvme0n1 : 5.00 12407.60 48.47 0.00 0.00 0.00 0.00 0.00 01:33:34.809 [2024-12-09T05:28:26.426Z] =================================================================================================================== 01:33:34.809 [2024-12-09T05:28:26.426Z] Total : 12407.60 48.47 0.00 0.00 0.00 0.00 0.00 01:33:34.809 01:33:35.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:35.742 Nvme0n1 : 6.00 11898.17 46.48 0.00 0.00 0.00 0.00 0.00 01:33:35.742 [2024-12-09T05:28:27.359Z] =================================================================================================================== 01:33:35.742 [2024-12-09T05:28:27.359Z] Total : 11898.17 46.48 0.00 0.00 0.00 0.00 0.00 01:33:35.742 01:33:36.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:36.674 Nvme0n1 : 7.00 12011.43 46.92 0.00 0.00 0.00 0.00 0.00 01:33:36.674 [2024-12-09T05:28:28.291Z] =================================================================================================================== 01:33:36.674 [2024-12-09T05:28:28.291Z] Total : 12011.43 46.92 0.00 0.00 0.00 0.00 0.00 01:33:36.674 01:33:37.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:37.605 Nvme0n1 : 8.00 12099.88 47.27 0.00 0.00 0.00 0.00 0.00 01:33:37.605 [2024-12-09T05:28:29.222Z] =================================================================================================================== 01:33:37.605 [2024-12-09T05:28:29.222Z] Total : 12099.88 47.27 0.00 0.00 0.00 0.00 0.00 01:33:37.605 01:33:38.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:38.535 Nvme0n1 : 9.00 12162.00 47.51 0.00 0.00 0.00 0.00 0.00 01:33:38.535 [2024-12-09T05:28:30.152Z] =================================================================================================================== 01:33:38.535 [2024-12-09T05:28:30.152Z] Total : 12162.00 47.51 0.00 0.00 0.00 0.00 0.00 01:33:38.535 01:33:39.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:39.467 Nvme0n1 : 10.00 12208.10 47.69 0.00 0.00 0.00 0.00 0.00 01:33:39.467 [2024-12-09T05:28:31.084Z] =================================================================================================================== 01:33:39.467 [2024-12-09T05:28:31.084Z] Total : 12208.10 47.69 0.00 0.00 0.00 0.00 0.00 01:33:39.467 01:33:39.467 01:33:39.467 Latency(us) 01:33:39.467 [2024-12-09T05:28:31.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:39.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:39.467 Nvme0n1 : 10.00 12211.40 47.70 0.00 0.00 10477.66 4763.96 262950.60 01:33:39.467 [2024-12-09T05:28:31.084Z] =================================================================================================================== 01:33:39.467 [2024-12-09T05:28:31.084Z] Total : 12211.40 47.70 0.00 0.00 10477.66 4763.96 262950.60 01:33:39.467 { 01:33:39.467 "results": [ 01:33:39.467 { 01:33:39.467 "job": "Nvme0n1", 01:33:39.467 "core_mask": "0x2", 01:33:39.467 "workload": "randwrite", 01:33:39.467 "status": "finished", 01:33:39.467 "queue_depth": 128, 01:33:39.467 
"io_size": 4096, 01:33:39.467 "runtime": 10.002536, 01:33:39.467 "iops": 12211.403188151486, 01:33:39.467 "mibps": 47.70079370371674, 01:33:39.467 "io_failed": 0, 01:33:39.467 "io_timeout": 0, 01:33:39.467 "avg_latency_us": 10477.661920756225, 01:33:39.467 "min_latency_us": 4763.963076923077, 01:33:39.467 "max_latency_us": 262950.5969230769 01:33:39.467 } 01:33:39.467 ], 01:33:39.467 "core_count": 1 01:33:39.467 } 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101740 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 101740 ']' 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 101740 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:39.467 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101740 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:33:39.724 killing process with pid 101740 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101740' 01:33:39.724 Received shutdown signal, test time was about 10.000000 seconds 01:33:39.724 01:33:39.724 Latency(us) 01:33:39.724 [2024-12-09T05:28:31.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:39.724 [2024-12-09T05:28:31.341Z] =================================================================================================================== 01:33:39.724 [2024-12-09T05:28:31.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 101740 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 101740 01:33:39.724 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:33:39.986 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:33:40.246 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:33:40.246 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:40.246 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 01:33:40.246 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:33:40.246 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:33:40.505 [2024-12-09 05:28:31.982196] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:33:40.505 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:40.763 2024/12/09 05:28:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:39567f41-1883-4ba2-9375-9e5ab8b45e57], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:33:40.763 request: 01:33:40.763 { 01:33:40.763 "method": "bdev_lvol_get_lvstores", 01:33:40.763 "params": { 01:33:40.763 "uuid": "39567f41-1883-4ba2-9375-9e5ab8b45e57" 01:33:40.763 } 01:33:40.763 } 01:33:40.763 Got JSON-RPC error response 01:33:40.763 GoRPCClient: error on JSON-RPC call 01:33:40.763 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:33:40.763 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 
128 )) 01:33:40.763 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:40.763 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:40.763 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:33:40.763 aio_bdev 01:33:41.021 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2690c202-3516-4cf8-919e-79c21f824bd3 01:33:41.021 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2690c202-3516-4cf8-919e-79c21f824bd3 01:33:41.021 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:33:41.021 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:33:41.022 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:33:41.022 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:33:41.022 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:33:41.022 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2690c202-3516-4cf8-919e-79c21f824bd3 -t 2000 01:33:41.280 [ 01:33:41.280 { 01:33:41.280 "aliases": [ 01:33:41.280 "lvs/lvol" 01:33:41.280 ], 01:33:41.280 "assigned_rate_limits": { 01:33:41.280 "r_mbytes_per_sec": 0, 01:33:41.280 "rw_ios_per_sec": 0, 01:33:41.280 "rw_mbytes_per_sec": 0, 01:33:41.280 "w_mbytes_per_sec": 0 01:33:41.280 }, 01:33:41.280 "block_size": 4096, 01:33:41.280 "claimed": false, 01:33:41.280 "driver_specific": { 01:33:41.280 "lvol": { 01:33:41.280 "base_bdev": "aio_bdev", 01:33:41.280 "clone": false, 01:33:41.280 "esnap_clone": false, 01:33:41.280 "lvol_store_uuid": "39567f41-1883-4ba2-9375-9e5ab8b45e57", 01:33:41.280 "num_allocated_clusters": 38, 01:33:41.280 "snapshot": false, 01:33:41.280 "thin_provision": false 01:33:41.280 } 01:33:41.280 }, 01:33:41.280 "name": "2690c202-3516-4cf8-919e-79c21f824bd3", 01:33:41.280 "num_blocks": 38912, 01:33:41.280 "product_name": "Logical Volume", 01:33:41.280 "supported_io_types": { 01:33:41.280 "abort": false, 01:33:41.280 "compare": false, 01:33:41.280 "compare_and_write": false, 01:33:41.280 "copy": false, 01:33:41.280 "flush": false, 01:33:41.280 "get_zone_info": false, 01:33:41.280 "nvme_admin": false, 01:33:41.280 "nvme_io": false, 01:33:41.280 "nvme_io_md": false, 01:33:41.280 "nvme_iov_md": false, 01:33:41.280 "read": true, 01:33:41.280 "reset": true, 01:33:41.280 "seek_data": true, 01:33:41.280 "seek_hole": true, 01:33:41.280 "unmap": true, 01:33:41.280 "write": true, 01:33:41.280 "write_zeroes": true, 01:33:41.280 "zcopy": false, 01:33:41.280 "zone_append": false, 01:33:41.280 "zone_management": false 01:33:41.280 }, 01:33:41.280 "uuid": 
"2690c202-3516-4cf8-919e-79c21f824bd3", 01:33:41.280 "zoned": false 01:33:41.281 } 01:33:41.281 ] 01:33:41.281 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:33:41.281 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:41.281 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:33:41.538 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:33:41.538 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:41.538 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:33:41.538 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:33:41.538 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2690c202-3516-4cf8-919e-79c21f824bd3 01:33:41.796 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39567f41-1883-4ba2-9375-9e5ab8b45e57 01:33:42.055 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:33:42.313 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:42.572 ************************************ 01:33:42.572 END TEST lvs_grow_clean 01:33:42.572 ************************************ 01:33:42.572 01:33:42.572 real 0m16.521s 01:33:42.572 user 0m16.007s 01:33:42.572 sys 0m1.588s 01:33:42.572 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:33:42.572 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:33:42.852 ************************************ 01:33:42.852 START TEST lvs_grow_dirty 01:33:42.852 ************************************ 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:33:42.852 05:28:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:33:42.852 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:42.853 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:42.853 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:33:42.853 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:33:42.853 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:33:43.110 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:43.110 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:33:43.110 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:43.369 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:33:43.369 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:33:43.369 05:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 lvol 150 01:33:43.627 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:43.627 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:33:43.627 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:33:43.887 [2024-12-09 05:28:35.270089] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:33:43.887 [2024-12-09 05:28:35.270209] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:33:43.887 true 01:33:43.887 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:43.887 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:33:43.887 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:33:43.888 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:33:44.145 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:44.403 05:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:33:44.662 [2024-12-09 05:28:36.082398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:33:44.662 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102169 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102169 /var/tmp/bdevperf.sock 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102169 ']' 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:44.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
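From here the dirty variant retraces the data-path sequence already shown for lvs_grow_clean: bdevperf (started with -z, so it idles until perform_tests arrives on its RPC socket) attaches a controller to the exported lvol over NVMe/TCP and drives the 10-second randwrite workload, and two seconds in the test grows the lvstore underneath the running I/O. Condensed from the commands in this trace (the lvstore UUID is the one created for this run; the harness waits for /var/tmp/bdevperf.sock before issuing RPCs):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0      # exported lvol appears as Nvme0n1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &                       # kick off the 10 s randwrite run
    sleep 2
    $rpc bdev_lvol_grow_lvstore -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 # 49 -> 99 clusters while I/O is in flight
    wait                                                                # run completes; cluster counts are then verified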
01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:44.920 05:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:33:44.920 [2024-12-09 05:28:36.337111] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:44.920 [2024-12-09 05:28:36.337679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102169 ] 01:33:44.920 [2024-12-09 05:28:36.477489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:44.920 [2024-12-09 05:28:36.513321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:33:45.855 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:45.855 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:33:45.855 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:33:46.112 Nvme0n1 01:33:46.112 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:33:46.112 [ 01:33:46.112 { 01:33:46.112 "aliases": [ 01:33:46.112 "55b4bf86-895c-4e12-8a1c-a69190cd9f3c" 01:33:46.112 ], 01:33:46.112 "assigned_rate_limits": { 01:33:46.112 "r_mbytes_per_sec": 0, 01:33:46.112 "rw_ios_per_sec": 0, 01:33:46.112 "rw_mbytes_per_sec": 0, 01:33:46.112 "w_mbytes_per_sec": 0 01:33:46.112 }, 01:33:46.112 "block_size": 4096, 01:33:46.112 "claimed": false, 01:33:46.112 "driver_specific": { 01:33:46.112 "mp_policy": "active_passive", 01:33:46.112 "nvme": [ 01:33:46.112 { 01:33:46.112 "ctrlr_data": { 01:33:46.112 "ana_reporting": false, 01:33:46.112 "cntlid": 1, 01:33:46.112 "firmware_revision": "25.01", 01:33:46.112 "model_number": "SPDK bdev Controller", 01:33:46.112 "multi_ctrlr": true, 01:33:46.112 "oacs": { 01:33:46.112 "firmware": 0, 01:33:46.112 "format": 0, 01:33:46.112 "ns_manage": 0, 01:33:46.112 "security": 0 01:33:46.112 }, 01:33:46.112 "serial_number": "SPDK0", 01:33:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:46.112 "vendor_id": "0x8086" 01:33:46.112 }, 01:33:46.112 "ns_data": { 01:33:46.112 "can_share": true, 01:33:46.112 "id": 1 01:33:46.112 }, 01:33:46.112 "trid": { 01:33:46.112 "adrfam": "IPv4", 01:33:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:33:46.112 "traddr": "10.0.0.3", 01:33:46.112 "trsvcid": "4420", 01:33:46.112 "trtype": "TCP" 01:33:46.112 }, 01:33:46.112 "vs": { 01:33:46.112 "nvme_version": "1.3" 01:33:46.112 } 01:33:46.112 } 01:33:46.112 ] 01:33:46.112 }, 01:33:46.112 "memory_domains": [ 01:33:46.112 { 01:33:46.112 "dma_device_id": "system", 01:33:46.112 "dma_device_type": 1 
01:33:46.112 } 01:33:46.112 ], 01:33:46.112 "name": "Nvme0n1", 01:33:46.112 "num_blocks": 38912, 01:33:46.112 "numa_id": -1, 01:33:46.112 "product_name": "NVMe disk", 01:33:46.112 "supported_io_types": { 01:33:46.112 "abort": true, 01:33:46.112 "compare": true, 01:33:46.112 "compare_and_write": true, 01:33:46.112 "copy": true, 01:33:46.112 "flush": true, 01:33:46.112 "get_zone_info": false, 01:33:46.112 "nvme_admin": true, 01:33:46.112 "nvme_io": true, 01:33:46.112 "nvme_io_md": false, 01:33:46.112 "nvme_iov_md": false, 01:33:46.112 "read": true, 01:33:46.112 "reset": true, 01:33:46.112 "seek_data": false, 01:33:46.112 "seek_hole": false, 01:33:46.112 "unmap": true, 01:33:46.112 "write": true, 01:33:46.112 "write_zeroes": true, 01:33:46.112 "zcopy": false, 01:33:46.112 "zone_append": false, 01:33:46.112 "zone_management": false 01:33:46.112 }, 01:33:46.112 "uuid": "55b4bf86-895c-4e12-8a1c-a69190cd9f3c", 01:33:46.112 "zoned": false 01:33:46.112 } 01:33:46.112 ] 01:33:46.113 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102211 01:33:46.113 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:33:46.113 05:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:33:46.370 Running I/O for 10 seconds... 01:33:47.303 Latency(us) 01:33:47.303 [2024-12-09T05:28:38.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:47.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:47.303 Nvme0n1 : 1.00 11611.00 45.36 0.00 0.00 0.00 0.00 0.00 01:33:47.303 [2024-12-09T05:28:38.920Z] =================================================================================================================== 01:33:47.303 [2024-12-09T05:28:38.920Z] Total : 11611.00 45.36 0.00 0.00 0.00 0.00 0.00 01:33:47.303 01:33:48.236 05:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:48.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:48.236 Nvme0n1 : 2.00 11370.50 44.42 0.00 0.00 0.00 0.00 0.00 01:33:48.236 [2024-12-09T05:28:39.853Z] =================================================================================================================== 01:33:48.236 [2024-12-09T05:28:39.853Z] Total : 11370.50 44.42 0.00 0.00 0.00 0.00 0.00 01:33:48.236 01:33:48.494 true 01:33:48.494 05:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:33:48.494 05:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:48.752 05:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:33:48.752 05:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:33:48.752 05:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102211 01:33:49.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:49.318 Nvme0n1 : 3.00 10980.67 42.89 0.00 0.00 0.00 0.00 0.00 01:33:49.318 [2024-12-09T05:28:40.935Z] =================================================================================================================== 01:33:49.318 [2024-12-09T05:28:40.935Z] Total : 10980.67 42.89 0.00 0.00 0.00 0.00 0.00 01:33:49.318 01:33:50.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:50.252 Nvme0n1 : 4.00 10816.50 42.25 0.00 0.00 0.00 0.00 0.00 01:33:50.252 [2024-12-09T05:28:41.869Z] =================================================================================================================== 01:33:50.252 [2024-12-09T05:28:41.869Z] Total : 10816.50 42.25 0.00 0.00 0.00 0.00 0.00 01:33:50.252 01:33:51.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:51.193 Nvme0n1 : 5.00 11294.00 44.12 0.00 0.00 0.00 0.00 0.00 01:33:51.193 [2024-12-09T05:28:42.810Z] =================================================================================================================== 01:33:51.193 [2024-12-09T05:28:42.810Z] Total : 11294.00 44.12 0.00 0.00 0.00 0.00 0.00 01:33:51.193 01:33:52.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:52.568 Nvme0n1 : 6.00 11660.17 45.55 0.00 0.00 0.00 0.00 0.00 01:33:52.568 [2024-12-09T05:28:44.185Z] =================================================================================================================== 01:33:52.568 [2024-12-09T05:28:44.185Z] Total : 11660.17 45.55 0.00 0.00 0.00 0.00 0.00 01:33:52.568 01:33:53.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:53.500 Nvme0n1 : 7.00 11904.86 46.50 0.00 0.00 0.00 0.00 0.00 01:33:53.500 [2024-12-09T05:28:45.117Z] =================================================================================================================== 01:33:53.500 [2024-12-09T05:28:45.117Z] Total : 11904.86 46.50 0.00 0.00 0.00 0.00 0.00 01:33:53.500 01:33:54.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:54.433 Nvme0n1 : 8.00 11855.50 46.31 0.00 0.00 0.00 0.00 0.00 01:33:54.433 [2024-12-09T05:28:46.050Z] =================================================================================================================== 01:33:54.433 [2024-12-09T05:28:46.050Z] Total : 11855.50 46.31 0.00 0.00 0.00 0.00 0.00 01:33:54.433 01:33:55.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:55.366 Nvme0n1 : 9.00 11159.33 43.59 0.00 0.00 0.00 0.00 0.00 01:33:55.366 [2024-12-09T05:28:46.983Z] =================================================================================================================== 01:33:55.366 [2024-12-09T05:28:46.983Z] Total : 11159.33 43.59 0.00 0.00 0.00 0.00 0.00 01:33:55.366 01:33:56.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:56.326 Nvme0n1 : 10.00 11288.50 44.10 0.00 0.00 0.00 0.00 0.00 01:33:56.326 [2024-12-09T05:28:47.943Z] =================================================================================================================== 01:33:56.326 [2024-12-09T05:28:47.943Z] Total : 11288.50 44.10 0.00 0.00 0.00 0.00 0.00 01:33:56.326 01:33:56.326 01:33:56.326 Latency(us) 01:33:56.326 [2024-12-09T05:28:47.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:56.326 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 01:33:56.326 Nvme0n1 : 10.01 11290.57 44.10 0.00 0.00 11333.72 3705.30 629145.60 01:33:56.326 [2024-12-09T05:28:47.943Z] =================================================================================================================== 01:33:56.326 [2024-12-09T05:28:47.943Z] Total : 11290.57 44.10 0.00 0.00 11333.72 3705.30 629145.60 01:33:56.326 { 01:33:56.326 "results": [ 01:33:56.326 { 01:33:56.326 "job": "Nvme0n1", 01:33:56.326 "core_mask": "0x2", 01:33:56.326 "workload": "randwrite", 01:33:56.326 "status": "finished", 01:33:56.326 "queue_depth": 128, 01:33:56.326 "io_size": 4096, 01:33:56.326 "runtime": 10.009505, 01:33:56.326 "iops": 11290.568314816766, 01:33:56.326 "mibps": 44.103782479752994, 01:33:56.326 "io_failed": 0, 01:33:56.326 "io_timeout": 0, 01:33:56.326 "avg_latency_us": 11333.724384982259, 01:33:56.326 "min_latency_us": 3705.3046153846153, 01:33:56.326 "max_latency_us": 629145.6 01:33:56.326 } 01:33:56.326 ], 01:33:56.326 "core_count": 1 01:33:56.326 } 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102169 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102169 ']' 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102169 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:33:56.326 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102169 01:33:56.326 killing process with pid 102169 01:33:56.326 Received shutdown signal, test time was about 10.000000 seconds 01:33:56.326 01:33:56.326 Latency(us) 01:33:56.326 [2024-12-09T05:28:47.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:33:56.326 [2024-12-09T05:28:47.944Z] =================================================================================================================== 01:33:56.327 [2024-12-09T05:28:47.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:33:56.327 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:33:56.327 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:33:56.327 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102169' 01:33:56.327 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102169 01:33:56.327 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102169 01:33:56.592 05:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:33:56.592 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:33:56.866 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:33:56.866 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101589 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101589 01:33:57.125 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101589 Killed "${NVMF_APP[@]}" "$@" 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102375 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102375 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102375 ']' 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:33:57.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:33:57.125 05:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:33:57.125 [2024-12-09 05:28:48.659572] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:33:57.125 [2024-12-09 05:28:48.660487] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:33:57.125 [2024-12-09 05:28:48.660537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:33:57.383 [2024-12-09 05:28:48.800271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:33:57.383 [2024-12-09 05:28:48.834579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:33:57.383 [2024-12-09 05:28:48.834634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:33:57.383 [2024-12-09 05:28:48.834639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:33:57.383 [2024-12-09 05:28:48.834643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:33:57.383 [2024-12-09 05:28:48.834647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:33:57.383 [2024-12-09 05:28:48.834845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:33:57.383 [2024-12-09 05:28:48.886114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:33:57.383 [2024-12-09 05:28:48.886320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:33:57.948 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:33:57.948 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:33:57.948 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:33:57.948 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:33:57.948 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:33:58.206 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:33:58.206 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:33:58.206 [2024-12-09 05:28:49.780195] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 01:33:58.206 [2024-12-09 05:28:49.780509] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:33:58.206 [2024-12-09 05:28:49.780781] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:33:58.466 05:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:33:58.466 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55b4bf86-895c-4e12-8a1c-a69190cd9f3c -t 2000 01:33:58.726 [ 01:33:58.726 { 01:33:58.726 "aliases": [ 01:33:58.726 "lvs/lvol" 01:33:58.726 ], 01:33:58.726 "assigned_rate_limits": { 01:33:58.726 "r_mbytes_per_sec": 0, 01:33:58.726 "rw_ios_per_sec": 0, 01:33:58.726 "rw_mbytes_per_sec": 0, 01:33:58.726 "w_mbytes_per_sec": 0 01:33:58.726 }, 01:33:58.726 "block_size": 4096, 01:33:58.726 "claimed": false, 01:33:58.726 "driver_specific": { 01:33:58.726 "lvol": { 01:33:58.726 "base_bdev": "aio_bdev", 01:33:58.726 "clone": false, 01:33:58.726 "esnap_clone": false, 01:33:58.726 
"lvol_store_uuid": "bf13712c-866f-4c6b-9fbe-eb9f4b7c2559", 01:33:58.726 "num_allocated_clusters": 38, 01:33:58.726 "snapshot": false, 01:33:58.726 "thin_provision": false 01:33:58.726 } 01:33:58.726 }, 01:33:58.726 "name": "55b4bf86-895c-4e12-8a1c-a69190cd9f3c", 01:33:58.726 "num_blocks": 38912, 01:33:58.726 "product_name": "Logical Volume", 01:33:58.726 "supported_io_types": { 01:33:58.726 "abort": false, 01:33:58.726 "compare": false, 01:33:58.726 "compare_and_write": false, 01:33:58.726 "copy": false, 01:33:58.726 "flush": false, 01:33:58.726 "get_zone_info": false, 01:33:58.726 "nvme_admin": false, 01:33:58.726 "nvme_io": false, 01:33:58.726 "nvme_io_md": false, 01:33:58.726 "nvme_iov_md": false, 01:33:58.726 "read": true, 01:33:58.726 "reset": true, 01:33:58.726 "seek_data": true, 01:33:58.726 "seek_hole": true, 01:33:58.726 "unmap": true, 01:33:58.726 "write": true, 01:33:58.726 "write_zeroes": true, 01:33:58.726 "zcopy": false, 01:33:58.726 "zone_append": false, 01:33:58.726 "zone_management": false 01:33:58.726 }, 01:33:58.726 "uuid": "55b4bf86-895c-4e12-8a1c-a69190cd9f3c", 01:33:58.726 "zoned": false 01:33:58.726 } 01:33:58.726 ] 01:33:58.726 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:33:58.726 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:58.726 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:33:58.984 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:33:58.984 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:33:58.984 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:59.241 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:33:59.241 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:33:59.242 [2024-12-09 05:28:50.851368] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:59.501 
05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:33:59.501 05:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:33:59.501 2024/12/09 05:28:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:bf13712c-866f-4c6b-9fbe-eb9f4b7c2559], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:33:59.501 request: 01:33:59.501 { 01:33:59.501 "method": "bdev_lvol_get_lvstores", 01:33:59.501 "params": { 01:33:59.501 "uuid": "bf13712c-866f-4c6b-9fbe-eb9f4b7c2559" 01:33:59.501 } 01:33:59.501 } 01:33:59.501 Got JSON-RPC error response 01:33:59.501 GoRPCClient: error on JSON-RPC call 01:33:59.501 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:33:59.501 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:33:59.501 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:33:59.501 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:33:59.501 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:33:59.760 aio_bdev 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:33:59.760 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:34:00.020 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55b4bf86-895c-4e12-8a1c-a69190cd9f3c -t 2000 01:34:00.280 [ 01:34:00.280 { 01:34:00.280 "aliases": [ 01:34:00.280 "lvs/lvol" 01:34:00.280 ], 01:34:00.280 "assigned_rate_limits": { 01:34:00.280 "r_mbytes_per_sec": 0, 01:34:00.280 "rw_ios_per_sec": 0, 01:34:00.281 "rw_mbytes_per_sec": 0, 01:34:00.281 "w_mbytes_per_sec": 0 01:34:00.281 }, 01:34:00.281 "block_size": 4096, 01:34:00.281 "claimed": false, 01:34:00.281 "driver_specific": { 01:34:00.281 "lvol": { 01:34:00.281 "base_bdev": "aio_bdev", 01:34:00.281 "clone": false, 01:34:00.281 "esnap_clone": false, 01:34:00.281 "lvol_store_uuid": "bf13712c-866f-4c6b-9fbe-eb9f4b7c2559", 01:34:00.281 "num_allocated_clusters": 38, 01:34:00.281 "snapshot": false, 01:34:00.281 "thin_provision": false 01:34:00.281 } 01:34:00.281 }, 01:34:00.281 "name": "55b4bf86-895c-4e12-8a1c-a69190cd9f3c", 01:34:00.281 "num_blocks": 38912, 01:34:00.281 "product_name": "Logical Volume", 01:34:00.281 "supported_io_types": { 01:34:00.281 "abort": false, 01:34:00.281 "compare": false, 01:34:00.281 "compare_and_write": false, 01:34:00.281 "copy": false, 01:34:00.281 "flush": false, 01:34:00.281 "get_zone_info": false, 01:34:00.281 "nvme_admin": false, 01:34:00.281 "nvme_io": false, 01:34:00.281 "nvme_io_md": false, 01:34:00.281 "nvme_iov_md": false, 01:34:00.281 "read": true, 01:34:00.281 "reset": true, 01:34:00.281 "seek_data": true, 01:34:00.281 "seek_hole": true, 01:34:00.281 "unmap": true, 01:34:00.281 "write": true, 01:34:00.281 "write_zeroes": true, 01:34:00.281 "zcopy": false, 01:34:00.281 "zone_append": false, 01:34:00.281 "zone_management": false 01:34:00.281 }, 01:34:00.281 "uuid": "55b4bf86-895c-4e12-8a1c-a69190cd9f3c", 01:34:00.281 "zoned": false 01:34:00.281 } 01:34:00.281 ] 01:34:00.281 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:34:00.281 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:34:00.281 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:34:00.541 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:34:00.541 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:34:00.541 05:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:34:00.801 05:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:34:00.801 
05:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55b4bf86-895c-4e12-8a1c-a69190cd9f3c 01:34:00.801 05:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf13712c-866f-4c6b-9fbe-eb9f4b7c2559 01:34:01.061 05:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:34:01.321 05:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:34:01.893 ************************************ 01:34:01.893 END TEST lvs_grow_dirty 01:34:01.893 ************************************ 01:34:01.893 01:34:01.893 real 0m19.058s 01:34:01.893 user 0m27.404s 01:34:01.893 sys 0m5.507s 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:34:01.893 nvmf_trace.0 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:34:01.893 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:34:03.273 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:34:03.273 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:34:03.273 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:34:03.273 05:28:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:34:03.273 rmmod nvme_tcp 01:34:03.273 rmmod nvme_fabrics 01:34:03.274 rmmod nvme_keyring 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102375 ']' 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102375 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102375 ']' 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102375 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102375 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:03.274 killing process with pid 102375 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102375' 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102375 01:34:03.274 05:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102375 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:34:03.535 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:34:03.795 01:34:03.795 real 0m39.214s 01:34:03.795 user 0m44.498s 01:34:03.795 sys 0m8.973s 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:03.795 ************************************ 01:34:03.795 END TEST nvmf_lvs_grow 01:34:03.795 ************************************ 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:34:03.795 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:34:03.796 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:03.796 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:34:03.796 ************************************ 01:34:03.796 START TEST nvmf_bdev_io_wait 01:34:03.796 ************************************ 01:34:03.796 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:34:04.077 * Looking for test storage... 01:34:04.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:34:04.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:04.077 --rc genhtml_branch_coverage=1 01:34:04.077 --rc genhtml_function_coverage=1 01:34:04.077 --rc genhtml_legend=1 01:34:04.077 --rc geninfo_all_blocks=1 01:34:04.077 --rc geninfo_unexecuted_blocks=1 01:34:04.077 01:34:04.077 ' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:34:04.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:04.077 --rc genhtml_branch_coverage=1 01:34:04.077 --rc genhtml_function_coverage=1 01:34:04.077 --rc genhtml_legend=1 01:34:04.077 --rc geninfo_all_blocks=1 01:34:04.077 --rc geninfo_unexecuted_blocks=1 01:34:04.077 01:34:04.077 ' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:34:04.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:04.077 --rc genhtml_branch_coverage=1 01:34:04.077 --rc genhtml_function_coverage=1 01:34:04.077 --rc genhtml_legend=1 01:34:04.077 --rc geninfo_all_blocks=1 01:34:04.077 --rc geninfo_unexecuted_blocks=1 01:34:04.077 01:34:04.077 ' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:34:04.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:04.077 --rc genhtml_branch_coverage=1 01:34:04.077 --rc genhtml_function_coverage=1 01:34:04.077 --rc genhtml_legend=1 01:34:04.077 --rc geninfo_all_blocks=1 01:34:04.077 --rc 
geninfo_unexecuted_blocks=1 01:34:04.077 01:34:04.077 ' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:34:04.077 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:34:04.078 Cannot find device "nvmf_init_br" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:34:04.078 Cannot find device "nvmf_init_br2" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:34:04.078 Cannot find device "nvmf_tgt_br" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:34:04.078 Cannot find device "nvmf_tgt_br2" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:34:04.078 Cannot find device "nvmf_init_br" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:34:04.078 Cannot find device "nvmf_init_br2" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 01:34:04.078 Cannot find device "nvmf_tgt_br" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:34:04.078 Cannot find device "nvmf_tgt_br2" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:34:04.078 Cannot find device "nvmf_br" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:34:04.078 Cannot find device "nvmf_init_if" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:34:04.078 Cannot find device "nvmf_init_if2" 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:34:04.078 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:34:04.079 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:34:04.339 05:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:34:04.339 
05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:34:04.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:34:04.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 01:34:04.339 01:34:04.339 --- 10.0.0.3 ping statistics --- 01:34:04.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:04.339 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:34:04.339 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:34:04.339 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:34:04.339 01:34:04.339 --- 10.0.0.4 ping statistics --- 01:34:04.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:04.339 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:34:04.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:34:04.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 01:34:04.339 01:34:04.339 --- 10.0.0.1 ping statistics --- 01:34:04.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:04.339 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:34:04.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:34:04.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 01:34:04.339 01:34:04.339 --- 10.0.0.2 ping statistics --- 01:34:04.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:04.339 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:34:04.339 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=102848 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 102848 01:34:04.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 102848 ']' 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
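The nvmf_veth_init trace above amounts to a small two-sided test network: four veth pairs, one bridge, and a dedicated namespace for the target. The following is a condensed sketch reconstructed from the commands visible in the log (not taken from common.sh itself), reusing the same device, namespace, and address names:

    ip netns add nvmf_tgt_ns_spdk                                  # target side gets its own namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator leg, 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator leg, 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target leg,    10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target leg,    10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if  && ip link set nvmf_init_if up
    ip addr add 10.0.0.2/24 dev nvmf_init_if2 && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c '
        ip addr add 10.0.0.3/24 dev nvmf_tgt_if  && ip link set nvmf_tgt_if up
        ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 && ip link set nvmf_tgt_if2 up
        ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the four peer ends
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # Admit NVMe/TCP (port 4420) on the initiator interfaces and let the bridge forward;
    # each rule carries an SPDK_NVMF comment so teardown can find it again later.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: init_if2'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF: bridge forward'

The four pings above (host to 10.0.0.3 and 10.0.0.4, namespace to 10.0.0.1 and 10.0.0.2) are the sanity check that both directions cross the bridge before the target is started.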
01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:04.340 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:04.340 [2024-12-09 05:28:55.885805] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:34:04.340 [2024-12-09 05:28:55.886575] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:04.340 [2024-12-09 05:28:55.886614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:34:04.599 [2024-12-09 05:28:56.026841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:34:04.599 [2024-12-09 05:28:56.064431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:34:04.599 [2024-12-09 05:28:56.064470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:34:04.599 [2024-12-09 05:28:56.064477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:34:04.599 [2024-12-09 05:28:56.064482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:34:04.599 [2024-12-09 05:28:56.064486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:34:04.599 [2024-12-09 05:28:56.065319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:04.599 [2024-12-09 05:28:56.065641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:34:04.599 [2024-12-09 05:28:56.066114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:34:04.599 [2024-12-09 05:28:56.066226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:04.599 [2024-12-09 05:28:56.066627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
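The startup notices are worth unpacking: -m 0xF yields the four reactors reported on cores 0 through 3, --interrupt-mode is what produces the "Set spdk_thread (app_thread) to intr mode" line, and -e 0xFFFF enables every tracepoint group, which is why the app prints the spdk_trace hint. If the run needs inspecting, the commands the notice itself suggests are enough:

    spdk_trace -s nvmf -i 0          # live snapshot of the trace ring for shm instance 0, as quoted in the notice
    cp /dev/shm/nvmf_trace.0 /tmp/   # keep the raw trace buffer for offline analysis, per the notice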
01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.170 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.432 [2024-12-09 05:28:56.835079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:34:05.432 [2024-12-09 05:28:56.835843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:34:05.432 [2024-12-09 05:28:56.835871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:34:05.432 [2024-12-09 05:28:56.836193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
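Because nvmf_tgt was launched with --wait-for-rpc, subsystem initialization stalls until the framework_start_init call above, and that pause is what lets the test tune bdev options first. The two RPCs boil down to the following rpc.py calls against the default /var/tmp/spdk.sock the target is listening on (same short flags as in the trace):

    rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool of only 5 entries, per-thread cache of 1
    rpc.py framework_start_init         # now let the remaining subsystems come up

Starving the bdev_io pool this way is presumably the point of the test: with only five descriptors available, bdevperf's 128-deep queues have to fall back to the queue-io-wait retry path that nvmf_bdev_io_wait is named after.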
01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.433 [2024-12-09 05:28:56.846857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.433 Malloc0 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:05.433 [2024-12-09 05:28:56.915058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=102901 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=102903 01:34:05.433 05:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=102905 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:05.433 { 01:34:05.433 "params": { 01:34:05.433 "name": "Nvme$subsystem", 01:34:05.433 "trtype": "$TEST_TRANSPORT", 01:34:05.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:05.433 "adrfam": "ipv4", 01:34:05.433 "trsvcid": "$NVMF_PORT", 01:34:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:05.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:05.433 "hdgst": ${hdgst:-false}, 01:34:05.433 "ddgst": ${ddgst:-false} 01:34:05.433 }, 01:34:05.433 "method": "bdev_nvme_attach_controller" 01:34:05.433 } 01:34:05.433 EOF 01:34:05.433 )") 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=102906 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:05.433 { 01:34:05.433 "params": { 01:34:05.433 "name": "Nvme$subsystem", 01:34:05.433 "trtype": "$TEST_TRANSPORT", 01:34:05.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:05.433 "adrfam": "ipv4", 01:34:05.433 "trsvcid": "$NVMF_PORT", 01:34:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:05.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:05.433 "hdgst": ${hdgst:-false}, 01:34:05.433 "ddgst": ${ddgst:-false} 01:34:05.433 }, 01:34:05.433 "method": "bdev_nvme_attach_controller" 01:34:05.433 } 01:34:05.433 EOF 01:34:05.433 )") 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:34:05.433 05:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:05.433 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:05.433 { 01:34:05.433 "params": { 01:34:05.433 "name": "Nvme$subsystem", 01:34:05.433 "trtype": "$TEST_TRANSPORT", 01:34:05.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:05.433 "adrfam": "ipv4", 01:34:05.433 "trsvcid": "$NVMF_PORT", 01:34:05.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:05.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:05.433 "hdgst": ${hdgst:-false}, 01:34:05.433 "ddgst": ${ddgst:-false} 01:34:05.433 }, 01:34:05.433 "method": "bdev_nvme_attach_controller" 01:34:05.433 } 01:34:05.433 EOF 01:34:05.433 )") 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
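The --json /dev/fd/63 argument that each bdevperf receives is bash process substitution: the JSON emitted by gen_nvmf_target_json is handed to the process as a file descriptor instead of a file on disk. Schematically, for the write job, with paths and options copied from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # the <() expands to a /dev/fd/NN path at run time

The substituted document is the bdev_nvme_attach_controller block printed just below, which points the bdevperf-side initiator at the in-namespace listener 10.0.0.3:4420.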
01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:05.434 "params": { 01:34:05.434 "name": "Nvme1", 01:34:05.434 "trtype": "tcp", 01:34:05.434 "traddr": "10.0.0.3", 01:34:05.434 "adrfam": "ipv4", 01:34:05.434 "trsvcid": "4420", 01:34:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:05.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:05.434 "hdgst": false, 01:34:05.434 "ddgst": false 01:34:05.434 }, 01:34:05.434 "method": "bdev_nvme_attach_controller" 01:34:05.434 }' 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:05.434 "params": { 01:34:05.434 "name": "Nvme1", 01:34:05.434 "trtype": "tcp", 01:34:05.434 "traddr": "10.0.0.3", 01:34:05.434 "adrfam": "ipv4", 01:34:05.434 "trsvcid": "4420", 01:34:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:05.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:05.434 "hdgst": false, 01:34:05.434 "ddgst": false 01:34:05.434 }, 01:34:05.434 "method": "bdev_nvme_attach_controller" 01:34:05.434 }' 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:05.434 { 01:34:05.434 "params": { 01:34:05.434 "name": "Nvme$subsystem", 01:34:05.434 "trtype": "$TEST_TRANSPORT", 01:34:05.434 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:05.434 "adrfam": "ipv4", 01:34:05.434 "trsvcid": "$NVMF_PORT", 01:34:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:05.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:05.434 "hdgst": ${hdgst:-false}, 01:34:05.434 "ddgst": ${ddgst:-false} 01:34:05.434 }, 01:34:05.434 "method": "bdev_nvme_attach_controller" 01:34:05.434 } 01:34:05.434 EOF 01:34:05.434 )") 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
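Four such bdevperf instances run in parallel, one per workload, each on its own core and with its own shared-memory instance id (-i), which is what gives the distinct spdk1 through spdk4 file prefixes in the EAL parameter lines:

    # write -> -m 0x10 -i 1  (WRITE_PID=102901)
    # read  -> -m 0x20 -i 2  (READ_PID=102903)
    # flush -> -m 0x40 -i 3  (FLUSH_PID=102905)
    # unmap -> -m 0x80 -i 4  (UNMAP_PID=102906)
    wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"

The wait calls correspond to the bdev_io_wait.sh@37-40 lines further down; each one blocks until the matching one-second run has printed its latency table.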
01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:05.434 "params": { 01:34:05.434 "name": "Nvme1", 01:34:05.434 "trtype": "tcp", 01:34:05.434 "traddr": "10.0.0.3", 01:34:05.434 "adrfam": "ipv4", 01:34:05.434 "trsvcid": "4420", 01:34:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:05.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:05.434 "hdgst": false, 01:34:05.434 "ddgst": false 01:34:05.434 }, 01:34:05.434 "method": "bdev_nvme_attach_controller" 01:34:05.434 }' 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:05.434 "params": { 01:34:05.434 "name": "Nvme1", 01:34:05.434 "trtype": "tcp", 01:34:05.434 "traddr": "10.0.0.3", 01:34:05.434 "adrfam": "ipv4", 01:34:05.434 "trsvcid": "4420", 01:34:05.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:05.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:05.434 "hdgst": false, 01:34:05.434 "ddgst": false 01:34:05.434 }, 01:34:05.434 "method": "bdev_nvme_attach_controller" 01:34:05.434 }' 01:34:05.434 [2024-12-09 05:28:56.956417] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:05.434 [2024-12-09 05:28:56.956469] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:34:05.434 [2024-12-09 05:28:56.957339] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:05.434 [2024-12-09 05:28:56.957392] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:34:05.434 [2024-12-09 05:28:56.965611] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:05.434 [2024-12-09 05:28:56.965790] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:34:05.434 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 102901 01:34:05.434 [2024-12-09 05:28:56.980735] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:34:05.434 [2024-12-09 05:28:56.980799] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:34:05.694 [2024-12-09 05:28:57.138489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:05.694 [2024-12-09 05:28:57.171851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:34:05.694 [2024-12-09 05:28:57.177825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:05.694 [2024-12-09 05:28:57.211768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:34:05.694 [2024-12-09 05:28:57.235861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:05.694 [2024-12-09 05:28:57.269791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:34:05.694 [2024-12-09 05:28:57.289392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:05.694 Running I/O for 1 seconds... 01:34:05.954 Running I/O for 1 seconds... 01:34:05.954 [2024-12-09 05:28:57.331413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:34:05.954 Running I/O for 1 seconds... 01:34:05.954 Running I/O for 1 seconds... 01:34:06.915 8182.00 IOPS, 31.96 MiB/s 01:34:06.915 Latency(us) 01:34:06.915 [2024-12-09T05:28:58.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:06.915 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:34:06.915 Nvme1n1 : 1.02 8238.86 32.18 0.00 0.00 15436.42 4486.70 25407.80 01:34:06.915 [2024-12-09T05:28:58.532Z] =================================================================================================================== 01:34:06.915 [2024-12-09T05:28:58.532Z] Total : 8238.86 32.18 0.00 0.00 15436.42 4486.70 25407.80 01:34:06.915 11164.00 IOPS, 43.61 MiB/s 01:34:06.915 Latency(us) 01:34:06.915 [2024-12-09T05:28:58.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:06.915 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:34:06.915 Nvme1n1 : 1.01 11214.58 43.81 0.00 0.00 11370.92 5646.18 19156.68 01:34:06.915 [2024-12-09T05:28:58.532Z] =================================================================================================================== 01:34:06.915 [2024-12-09T05:28:58.532Z] Total : 11214.58 43.81 0.00 0.00 11370.92 5646.18 19156.68 01:34:06.915 192024.00 IOPS, 750.09 MiB/s 01:34:06.915 Latency(us) 01:34:06.915 [2024-12-09T05:28:58.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:06.915 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:34:06.915 Nvme1n1 : 1.00 191671.66 748.72 0.00 0.00 664.18 278.84 1827.45 01:34:06.915 [2024-12-09T05:28:58.532Z] =================================================================================================================== 01:34:06.915 [2024-12-09T05:28:58.532Z] Total : 191671.66 748.72 0.00 0.00 664.18 278.84 1827.45 01:34:06.915 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 102903 01:34:06.915 9059.00 IOPS, 35.39 MiB/s [2024-12-09T05:28:58.532Z] 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 102905 01:34:06.915 01:34:06.915 Latency(us) 01:34:06.915 [2024-12-09T05:28:58.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
01:34:06.915 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:34:06.915 Nvme1n1 : 1.01 9214.05 35.99 0.00 0.00 13861.17 4108.60 33877.07 01:34:06.915 [2024-12-09T05:28:58.532Z] =================================================================================================================== 01:34:06.915 [2024-12-09T05:28:58.532Z] Total : 9214.05 35.99 0.00 0.00 13861.17 4108.60 33877.07 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 102906 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:34:07.172 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:34:07.173 rmmod nvme_tcp 01:34:07.173 rmmod nvme_fabrics 01:34:07.173 rmmod nvme_keyring 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 102848 ']' 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 102848 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 102848 ']' 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 102848 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102848 01:34:07.173 killing process with pid 
102848 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102848' 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 102848 01:34:07.173 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 102848 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:34:07.450 05:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:34:07.450 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:34:07.450 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:34:07.450 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:34:07.707 ************************************ 01:34:07.707 01:34:07.707 real 0m3.786s 01:34:07.707 user 0m12.139s 01:34:07.707 sys 0m1.816s 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:34:07.707 END TEST nvmf_bdev_io_wait 01:34:07.707 ************************************ 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:34:07.707 ************************************ 01:34:07.707 START TEST nvmf_queue_depth 01:34:07.707 ************************************ 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:34:07.707 * Looking for test storage... 
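The teardown traced just before the new test header mirrors the setup: nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills target pid 102848, strips only the firewall rules that setup tagged, and deletes the veth, bridge, and namespace plumbing. The rule cleanup is the pipeline behind the iptr helper seen above:

    # Re-load the ruleset minus every rule carrying the SPDK_NVMF comment added at setup time.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Everything else (ip link delete, ip netns exec ... ip link delete, remove_spdk_ns) removes what nvmf_veth_init created, and the real/user/sys summary closes out nvmf_bdev_io_wait before nvmf_queue_depth begins with the same common.sh bring-up.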
01:34:07.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:34:07.707 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:34:07.708 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:34:07.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:07.966 --rc genhtml_branch_coverage=1 01:34:07.966 --rc genhtml_function_coverage=1 01:34:07.966 --rc genhtml_legend=1 01:34:07.966 --rc geninfo_all_blocks=1 01:34:07.966 --rc geninfo_unexecuted_blocks=1 01:34:07.966 01:34:07.966 ' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:34:07.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:07.966 --rc genhtml_branch_coverage=1 01:34:07.966 --rc genhtml_function_coverage=1 01:34:07.966 --rc genhtml_legend=1 01:34:07.966 --rc geninfo_all_blocks=1 01:34:07.966 --rc geninfo_unexecuted_blocks=1 01:34:07.966 01:34:07.966 ' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:34:07.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:07.966 --rc genhtml_branch_coverage=1 01:34:07.966 --rc genhtml_function_coverage=1 01:34:07.966 --rc genhtml_legend=1 01:34:07.966 --rc geninfo_all_blocks=1 01:34:07.966 --rc geninfo_unexecuted_blocks=1 01:34:07.966 01:34:07.966 ' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:34:07.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:07.966 --rc genhtml_branch_coverage=1 01:34:07.966 --rc genhtml_function_coverage=1 01:34:07.966 --rc genhtml_legend=1 01:34:07.966 --rc geninfo_all_blocks=1 01:34:07.966 --rc 
geninfo_unexecuted_blocks=1 01:34:07.966 01:34:07.966 ' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:34:07.966 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:34:07.967 Cannot find device "nvmf_init_br" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:34:07.967 Cannot find device "nvmf_init_br2" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:34:07.967 Cannot find device "nvmf_tgt_br" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:34:07.967 Cannot find device "nvmf_tgt_br2" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:34:07.967 Cannot find device "nvmf_init_br" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:34:07.967 Cannot find device "nvmf_init_br2" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:34:07.967 
05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:34:07.967 Cannot find device "nvmf_tgt_br" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:34:07.967 Cannot find device "nvmf_tgt_br2" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:34:07.967 Cannot find device "nvmf_br" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:34:07.967 Cannot find device "nvmf_init_if" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:34:07.967 Cannot find device "nvmf_init_if2" 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:07.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:07.967 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:07.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:34:07.968 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:34:08.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:34:08.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:34:08.226 01:34:08.226 --- 10.0.0.3 ping statistics --- 01:34:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:08.226 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:34:08.226 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:34:08.226 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 01:34:08.226 01:34:08.226 --- 10.0.0.4 ping statistics --- 01:34:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:08.226 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:34:08.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:34:08.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:34:08.226 01:34:08.226 --- 10.0.0.1 ping statistics --- 01:34:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:08.226 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:34:08.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:34:08.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 01:34:08.226 01:34:08.226 --- 10.0.0.2 ping statistics --- 01:34:08.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:08.226 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:34:08.226 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103164 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103164 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103164 ']' 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:08.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
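The entries above launch the target with nvmfappstart: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace with the interrupt-mode and core-mask flags shown, and waitforlisten then blocks until the RPC socket answers. A minimal stand-alone sketch of that launch-and-wait step, using only the paths and flags from the log (the rpc_get_methods poll is an illustrative readiness check; the real waitforlisten helper is more involved and its body is xtrace-disabled here):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll the default RPC socket until the app is ready to serve RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done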
01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:08.227 05:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:08.227 [2024-12-09 05:28:59.734627] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:34:08.227 [2024-12-09 05:28:59.735370] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:08.227 [2024-12-09 05:28:59.735409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:34:08.485 [2024-12-09 05:28:59.875604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:08.485 [2024-12-09 05:28:59.909369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:34:08.485 [2024-12-09 05:28:59.909517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:34:08.485 [2024-12-09 05:28:59.909529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:34:08.485 [2024-12-09 05:28:59.909534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:34:08.485 [2024-12-09 05:28:59.909538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:34:08.485 [2024-12-09 05:28:59.909778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:08.485 [2024-12-09 05:28:59.961397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:34:08.485 [2024-12-09 05:28:59.961658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
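With the target running in interrupt mode, the entries that follow configure it through rpc_cmd, the test helper around scripts/rpc.py. Condensed into plain rpc.py calls, with every argument taken verbatim from those entries (TCP transport, 64 MiB / 512-byte-block malloc bdev, subsystem cnode1, its namespace, and the 10.0.0.3:4420 listener):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420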
01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.052 [2024-12-09 05:29:00.662379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:09.052 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.311 Malloc0 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.311 [2024-12-09 05:29:00.718238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103214 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103214 /var/tmp/bdevperf.sock 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103214 ']' 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:34:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:09.311 05:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:34:09.311 [2024-12-09 05:29:00.762772] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
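At this point the listener is up on 10.0.0.3:4420 and bdevperf has been launched with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10. The remaining two steps the log records, condensed into the equivalent plain commands (arguments copied from the entries around this point):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# expose the target's namespace to bdevperf as an NVMe/TCP bdev (it shows up as NVMe0n1)
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# bdevperf was started with -z, so it waits for this RPC before running the 10-second verify workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests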
01:34:09.311 [2024-12-09 05:29:00.762830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103214 ] 01:34:09.311 [2024-12-09 05:29:00.900465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:09.570 [2024-12-09 05:29:00.936758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:10.134 NVMe0n1 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:10.134 05:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:34:10.392 Running I/O for 10 seconds... 01:34:12.261 8783.00 IOPS, 34.31 MiB/s [2024-12-09T05:29:04.810Z] 9217.50 IOPS, 36.01 MiB/s [2024-12-09T05:29:06.184Z] 9562.00 IOPS, 37.35 MiB/s [2024-12-09T05:29:07.117Z] 10255.50 IOPS, 40.06 MiB/s [2024-12-09T05:29:08.050Z] 10747.20 IOPS, 41.98 MiB/s [2024-12-09T05:29:08.982Z] 11118.67 IOPS, 43.43 MiB/s [2024-12-09T05:29:09.915Z] 11411.86 IOPS, 44.58 MiB/s [2024-12-09T05:29:10.847Z] 11596.00 IOPS, 45.30 MiB/s [2024-12-09T05:29:11.792Z] 11763.89 IOPS, 45.95 MiB/s [2024-12-09T05:29:12.065Z] 11896.70 IOPS, 46.47 MiB/s 01:34:20.448 Latency(us) 01:34:20.448 [2024-12-09T05:29:12.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:20.448 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:34:20.448 Verification LBA range: start 0x0 length 0x4000 01:34:20.448 NVMe0n1 : 10.06 11924.48 46.58 0.00 0.00 85571.18 22887.19 71787.13 01:34:20.448 [2024-12-09T05:29:12.065Z] =================================================================================================================== 01:34:20.448 [2024-12-09T05:29:12.065Z] Total : 11924.48 46.58 0.00 0.00 85571.18 22887.19 71787.13 01:34:20.448 { 01:34:20.448 "results": [ 01:34:20.448 { 01:34:20.448 "job": "NVMe0n1", 01:34:20.448 "core_mask": "0x1", 01:34:20.448 "workload": "verify", 01:34:20.448 "status": "finished", 01:34:20.448 "verify_range": { 01:34:20.448 "start": 0, 01:34:20.448 "length": 16384 01:34:20.448 }, 01:34:20.448 "queue_depth": 1024, 01:34:20.448 "io_size": 4096, 01:34:20.448 "runtime": 10.06258, 01:34:20.448 "iops": 11924.476625279003, 01:34:20.448 "mibps": 46.579986817496106, 01:34:20.448 "io_failed": 0, 01:34:20.448 "io_timeout": 0, 01:34:20.448 "avg_latency_us": 85571.17937281194, 01:34:20.448 "min_latency_us": 22887.187692307692, 01:34:20.448 "max_latency_us": 71787.12615384615 01:34:20.448 } 01:34:20.448 
], 01:34:20.448 "core_count": 1 01:34:20.448 } 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103214 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103214 ']' 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103214 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103214 01:34:20.448 killing process with pid 103214 01:34:20.448 Received shutdown signal, test time was about 10.000000 seconds 01:34:20.448 01:34:20.448 Latency(us) 01:34:20.448 [2024-12-09T05:29:12.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:20.448 [2024-12-09T05:29:12.065Z] =================================================================================================================== 01:34:20.448 [2024-12-09T05:29:12.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103214' 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103214 01:34:20.448 05:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103214 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:34:20.448 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:34:20.706 rmmod nvme_tcp 01:34:20.706 rmmod nvme_fabrics 01:34:20.706 rmmod nvme_keyring 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:34:20.706 05:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103164 ']' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103164 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103164 ']' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103164 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103164 01:34:20.706 killing process with pid 103164 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103164' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103164 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103164 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:34:20.706 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:34:20.963 05:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:20.963 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:34:20.963 01:34:20.963 real 0m13.332s 01:34:20.963 user 0m22.197s 01:34:20.963 sys 0m1.843s 01:34:20.963 ************************************ 01:34:20.963 END TEST nvmf_queue_depth 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:34:20.964 ************************************ 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:34:20.964 ************************************ 01:34:20.964 START TEST nvmf_target_multipath 01:34:20.964 ************************************ 01:34:20.964 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:34:21.223 * Looking for test storage... 
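Before the multipath test repeats the same bring-up, note how the queue-depth teardown above undid it: the iptr helper restores the firewall by filtering out only the rules the test had tagged, and nvmf_veth_fini removes the bridge, the veth pairs, and the target namespace. A condensed sketch assembled from those entries (the final netns delete is an assumption standing in for remove_spdk_ns, whose body is xtrace-disabled in the log):
# keep only the iptables rules that do not carry the SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore
# deleting one end of each veth pair removes its peer as well
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns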
01:34:21.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:34:21.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:21.223 --rc genhtml_branch_coverage=1 01:34:21.223 --rc genhtml_function_coverage=1 01:34:21.223 --rc genhtml_legend=1 01:34:21.223 --rc geninfo_all_blocks=1 01:34:21.223 --rc geninfo_unexecuted_blocks=1 01:34:21.223 01:34:21.223 ' 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:34:21.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:21.223 --rc genhtml_branch_coverage=1 01:34:21.223 --rc genhtml_function_coverage=1 01:34:21.223 --rc genhtml_legend=1 01:34:21.223 --rc geninfo_all_blocks=1 01:34:21.223 --rc geninfo_unexecuted_blocks=1 01:34:21.223 01:34:21.223 ' 01:34:21.223 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:34:21.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:21.223 --rc genhtml_branch_coverage=1 01:34:21.223 --rc genhtml_function_coverage=1 01:34:21.223 --rc genhtml_legend=1 01:34:21.223 --rc geninfo_all_blocks=1 01:34:21.223 --rc geninfo_unexecuted_blocks=1 01:34:21.223 01:34:21.224 ' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:34:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:21.224 --rc genhtml_branch_coverage=1 01:34:21.224 --rc genhtml_function_coverage=1 01:34:21.224 --rc 
genhtml_legend=1 01:34:21.224 --rc geninfo_all_blocks=1 01:34:21.224 --rc geninfo_unexecuted_blocks=1 01:34:21.224 01:34:21.224 ' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:34:21.224 05:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:34:21.224 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:34:21.225 05:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:34:21.225 Cannot find device "nvmf_init_br" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:34:21.225 Cannot find device "nvmf_init_br2" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:34:21.225 Cannot find device "nvmf_tgt_br" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:34:21.225 Cannot find device "nvmf_tgt_br2" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 01:34:21.225 Cannot find device "nvmf_init_br" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:34:21.225 Cannot find device "nvmf_init_br2" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:34:21.225 Cannot find device "nvmf_tgt_br" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:34:21.225 Cannot find device "nvmf_tgt_br2" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:34:21.225 Cannot find device "nvmf_br" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:34:21.225 Cannot find device "nvmf_init_if" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:34:21.225 Cannot find device "nvmf_init_if2" 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:21.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:21.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:34:21.225 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:34:21.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:34:21.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 01:34:21.484 01:34:21.484 --- 10.0.0.3 ping statistics --- 01:34:21.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:21.484 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:34:21.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:34:21.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 01:34:21.484 01:34:21.484 --- 10.0.0.4 ping statistics --- 01:34:21.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:21.484 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:34:21.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:34:21.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:34:21.484 01:34:21.484 --- 10.0.0.1 ping statistics --- 01:34:21.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:21.484 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:34:21.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:34:21.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 01:34:21.484 01:34:21.484 --- 10.0.0.2 ping statistics --- 01:34:21.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:21.484 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:34:21.484 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:34:21.485 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:34:21.485 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:34:21.485 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:34:21.485 05:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103590 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103590 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103590 ']' 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:21.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:21.485 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:34:21.485 [2024-12-09 05:29:13.047527] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:34:21.485 [2024-12-09 05:29:13.048113] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:21.485 [2024-12-09 05:29:13.048148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:34:21.743 [2024-12-09 05:29:13.189194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:34:21.743 [2024-12-09 05:29:13.226441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:34:21.743 [2024-12-09 05:29:13.226481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:34:21.743 [2024-12-09 05:29:13.226488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:34:21.743 [2024-12-09 05:29:13.226493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:34:21.743 [2024-12-09 05:29:13.226498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:34:21.743 [2024-12-09 05:29:13.227275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:21.743 [2024-12-09 05:29:13.227415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:34:21.743 [2024-12-09 05:29:13.228357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:34:21.743 [2024-12-09 05:29:13.228407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:21.743 [2024-12-09 05:29:13.283762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:34:21.743 [2024-12-09 05:29:13.284019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:34:21.743 [2024-12-09 05:29:13.284030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:34:21.743 [2024-12-09 05:29:13.284303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:34:21.743 [2024-12-09 05:29:13.284900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
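(Reader note, not part of the captured trace: the network setup and target launch recorded above can be condensed into the hand-written sketch below. It is a recap of what nvmf_veth_init and nvmfappstart do in this run, using the namespace name, addresses, and binary path taken from the log itself; it is not an extra step executed by the harness, and only the first initiator/target veth pair is shown — the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way to give the multipath test its second path.)

  # Create the target namespace and one initiator/target veth pair (host-side peers end in _br)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Address plan from the log: initiator 10.0.0.1/24 on the host, target 10.0.0.3/24 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring the interfaces up and bridge the host-side peers so initiator and target can reach each other
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Open the NVMe/TCP port for the initiator interface (the log adds one rule per initiator interface)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # Launch the SPDK target inside the namespace in interrupt mode, as nvmfappstart -m 0xF does above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF

(After this point the trace drives the target over rpc.py: nvmf_create_transport -t tcp, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and one nvmf_subsystem_add_listener per path, before connecting the kernel initiator to both listeners.)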
01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:34:22.676 05:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:34:22.676 [2024-12-09 05:29:14.153297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:34:22.676 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:34:22.933 Malloc0 01:34:22.933 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:34:23.190 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:34:23.448 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:34:23.448 [2024-12-09 05:29:15.025164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:34:23.448 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:34:23.713 [2024-12-09 05:29:15.225088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:34:23.713 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:34:23.970 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:34:23.970 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:34:23.970 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:34:23.970 05:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:34:23.970 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:34:23.970 05:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:34:25.864 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:34:26.120 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103722 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:34:26.121 05:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:34:26.121 [global] 01:34:26.121 thread=1 01:34:26.121 invalidate=1 01:34:26.121 rw=randrw 01:34:26.121 time_based=1 01:34:26.121 runtime=6 01:34:26.121 ioengine=libaio 01:34:26.121 direct=1 01:34:26.121 bs=4096 01:34:26.121 iodepth=128 01:34:26.121 norandommap=0 01:34:26.121 numjobs=1 01:34:26.121 01:34:26.121 verify_dump=1 01:34:26.121 verify_backlog=512 01:34:26.121 verify_state_save=0 01:34:26.121 do_verify=1 01:34:26.121 verify=crc32c-intel 01:34:26.121 [job0] 01:34:26.121 filename=/dev/nvme0n1 01:34:26.121 Could not set queue depth (nvme0n1) 01:34:26.121 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:34:26.121 fio-3.35 01:34:26.121 Starting 1 thread 01:34:27.052 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:27.310 05:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:34:28.681 05:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:34:28.682 05:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:34:28.682 05:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:28.682 05:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:34:28.682 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:28.939 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:34:29.874 05:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:34:29.874 05:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:34:29.874 05:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:29.874 05:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103722 01:34:32.455 01:34:32.455 job0: (groupid=0, jobs=1): err= 0: pid=103748: Mon Dec 9 05:29:23 2024 01:34:32.455 read: IOPS=16.0k, BW=62.4MiB/s (65.5MB/s)(375MiB/6004msec) 01:34:32.455 slat (usec): min=2, max=4139, avg=37.07, stdev=193.98 01:34:32.455 clat (usec): min=1091, max=10706, avg=5391.10, stdev=830.17 01:34:32.455 lat (usec): min=1097, max=10712, avg=5428.16, stdev=843.70 01:34:32.455 clat percentiles (usec): 01:34:32.455 | 1.00th=[ 3326], 5.00th=[ 4113], 10.00th=[ 4555], 20.00th=[ 4883], 01:34:32.455 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5473], 01:34:32.455 | 70.00th=[ 5604], 80.00th=[ 5800], 90.00th=[ 6259], 95.00th=[ 6980], 01:34:32.455 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[ 9241], 99.95th=[ 9765], 01:34:32.455 | 99.99th=[10290] 01:34:32.455 bw ( KiB/s): min=17888, max=42728, per=52.25%, avg=33405.33, stdev=6935.80, samples=12 01:34:32.455 iops : min= 4472, max=10682, avg=8351.33, stdev=1733.95, samples=12 01:34:32.455 write: IOPS=9507, BW=37.1MiB/s (38.9MB/s)(196MiB/5270msec); 0 zone resets 01:34:32.455 slat (usec): min=6, max=2449, avg=43.78, stdev=109.46 01:34:32.455 clat (usec): min=520, max=10721, avg=4878.51, stdev=622.56 01:34:32.455 lat (usec): min=535, max=10736, avg=4922.29, stdev=625.27 01:34:32.455 clat percentiles (usec): 01:34:32.455 | 1.00th=[ 2704], 5.00th=[ 3851], 10.00th=[ 4359], 20.00th=[ 4555], 01:34:32.455 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 01:34:32.455 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 01:34:32.455 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8356], 99.95th=[ 8717], 01:34:32.455 | 99.99th=[ 8979] 01:34:32.455 bw ( KiB/s): min=18880, max=41808, per=87.71%, avg=33356.00, stdev=6639.73, samples=12 01:34:32.455 iops : min= 4720, max=10452, avg=8339.00, stdev=1659.93, samples=12 01:34:32.455 lat (usec) : 750=0.01%, 1000=0.01% 01:34:32.455 lat (msec) : 2=0.12%, 4=4.60%, 10=95.25%, 20=0.03% 01:34:32.456 cpu : usr=3.65%, sys=17.72%, ctx=11075, majf=0, minf=163 01:34:32.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:34:32.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:32.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:34:32.456 issued rwts: total=95970,50102,0,0 short=0,0,0,0 dropped=0,0,0,0 01:34:32.456 latency : target=0, window=0, percentile=100.00%, depth=128 01:34:32.456 01:34:32.456 Run status group 0 (all jobs): 01:34:32.456 READ: bw=62.4MiB/s (65.5MB/s), 62.4MiB/s-62.4MiB/s (65.5MB/s-65.5MB/s), io=375MiB (393MB), run=6004-6004msec 01:34:32.456 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=196MiB (205MB), run=5270-5270msec 01:34:32.456 01:34:32.456 Disk stats (read/write): 01:34:32.456 nvme0n1: ios=94499/49314, merge=0/0, ticks=485400/231464, in_queue=716864, util=98.52% 01:34:32.456 05:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:34:32.456 05:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 01:34:32.714 05:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:34:33.646 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:34:33.646 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:34:33.646 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:34:33.647 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:34:33.647 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=103872 01:34:33.647 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:34:33.647 05:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:34:33.647 [global] 01:34:33.647 thread=1 01:34:33.647 invalidate=1 01:34:33.647 rw=randrw 01:34:33.647 time_based=1 01:34:33.647 runtime=6 01:34:33.647 ioengine=libaio 01:34:33.647 direct=1 01:34:33.647 bs=4096 01:34:33.647 iodepth=128 01:34:33.647 norandommap=0 01:34:33.647 numjobs=1 01:34:33.647 01:34:33.647 verify_dump=1 01:34:33.647 verify_backlog=512 01:34:33.647 verify_state_save=0 01:34:33.647 do_verify=1 01:34:33.647 verify=crc32c-intel 01:34:33.647 [job0] 01:34:33.647 filename=/dev/nvme0n1 01:34:33.647 Could not set queue depth (nvme0n1) 01:34:33.647 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:34:33.647 fio-3.35 01:34:33.647 Starting 1 thread 01:34:34.579 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:34:34.835 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:35.093 05:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:34:36.039 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:34:36.039 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:36.040 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:36.040 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:34:36.296 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:36.554 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:34:37.484 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:34:37.484 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:34:37.484 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:34:37.484 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 103872 01:34:40.008 01:34:40.008 job0: (groupid=0, jobs=1): err= 0: pid=103894: Mon Dec 9 05:29:31 2024 01:34:40.008 read: IOPS=16.2k, BW=63.1MiB/s (66.2MB/s)(379MiB/6005msec) 01:34:40.008 slat (nsec): min=1478, max=13272k, avg=32211.43, stdev=194913.88 01:34:40.008 clat (usec): min=144, max=49042, avg=5372.99, stdev=2387.55 01:34:40.008 lat (usec): min=152, max=49049, avg=5405.20, stdev=2398.88 01:34:40.008 clat percentiles (usec): 01:34:40.008 | 1.00th=[ 392], 5.00th=[ 1385], 10.00th=[ 3097], 20.00th=[ 4080], 01:34:40.008 | 30.00th=[ 4883], 40.00th=[ 5145], 50.00th=[ 5407], 60.00th=[ 5604], 01:34:40.008 | 70.00th=[ 5997], 80.00th=[ 6718], 90.00th=[ 7373], 95.00th=[ 7963], 01:34:40.008 | 99.00th=[10421], 99.50th=[11338], 99.90th=[44303], 99.95th=[46924], 01:34:40.008 | 99.99th=[49021] 01:34:40.008 bw ( KiB/s): min=13296, max=64896, per=52.19%, avg=33722.18, stdev=13126.98, samples=11 01:34:40.008 iops : min= 3324, max=16224, avg=8430.55, stdev=3281.74, samples=11 01:34:40.008 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(202MiB/5167msec); 0 zone resets 01:34:40.008 slat (usec): min=2, max=3821, avg=37.62, stdev=103.67 01:34:40.008 clat (usec): min=90, max=15910, avg=4660.96, stdev=1680.50 01:34:40.008 lat (usec): min=105, max=15929, avg=4698.58, stdev=1690.81 01:34:40.008 clat percentiles (usec): 01:34:40.008 | 1.00th=[ 289], 5.00th=[ 807], 10.00th=[ 2474], 20.00th=[ 3195], 01:34:40.008 | 30.00th=[ 4424], 40.00th=[ 4752], 50.00th=[ 4948], 60.00th=[ 5145], 01:34:40.008 | 70.00th=[ 5342], 80.00th=[ 5932], 90.00th=[ 6587], 95.00th=[ 6980], 01:34:40.008 | 99.00th=[ 7701], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11338], 01:34:40.008 | 99.99th=[15008] 01:34:40.008 bw ( KiB/s): 
min=13816, max=65416, per=84.31%, avg=33748.36, stdev=12979.82, samples=11 01:34:40.008 iops : min= 3454, max=16354, avg=8437.09, stdev=3244.95, samples=11 01:34:40.009 lat (usec) : 100=0.01%, 250=0.42%, 500=1.94%, 750=1.54%, 1000=0.76% 01:34:40.009 lat (msec) : 2=1.71%, 4=15.03%, 10=77.76%, 20=0.71%, 50=0.13% 01:34:40.009 cpu : usr=3.45%, sys=17.97%, ctx=13529, majf=0, minf=139 01:34:40.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:34:40.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:34:40.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:34:40.009 issued rwts: total=97008,51704,0,0 short=0,0,0,0 dropped=0,0,0,0 01:34:40.009 latency : target=0, window=0, percentile=100.00%, depth=128 01:34:40.009 01:34:40.009 Run status group 0 (all jobs): 01:34:40.009 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=379MiB (397MB), run=6005-6005msec 01:34:40.009 WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=202MiB (212MB), run=5167-5167msec 01:34:40.009 01:34:40.009 Disk stats (read/write): 01:34:40.009 nvme0n1: ios=95829/50645, merge=0/0, ticks=485317/226761, in_queue=712078, util=98.53% 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:34:40.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:34:40.009 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:34:40.267 
05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:34:40.267 rmmod nvme_tcp 01:34:40.267 rmmod nvme_fabrics 01:34:40.267 rmmod nvme_keyring 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103590 ']' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103590 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103590 ']' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103590 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103590 01:34:40.267 killing process with pid 103590 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103590' 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103590 01:34:40.267 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103590 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:34:40.525 05:29:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:34:40.525 05:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:40.525 ************************************ 01:34:40.525 END TEST nvmf_target_multipath 01:34:40.525 ************************************ 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:34:40.525 01:34:40.525 real 0m19.542s 01:34:40.525 user 1m7.837s 01:34:40.525 sys 0m6.685s 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:34:40.525 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:34:40.787 05:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:34:40.787 ************************************ 01:34:40.787 START TEST nvmf_zcopy 01:34:40.787 ************************************ 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:34:40.787 * Looking for test storage... 01:34:40.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:34:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:40.787 --rc genhtml_branch_coverage=1 01:34:40.787 --rc genhtml_function_coverage=1 01:34:40.787 --rc genhtml_legend=1 01:34:40.787 --rc geninfo_all_blocks=1 01:34:40.787 --rc geninfo_unexecuted_blocks=1 01:34:40.787 01:34:40.787 ' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:34:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:40.787 --rc genhtml_branch_coverage=1 01:34:40.787 --rc genhtml_function_coverage=1 01:34:40.787 --rc genhtml_legend=1 01:34:40.787 --rc geninfo_all_blocks=1 01:34:40.787 --rc geninfo_unexecuted_blocks=1 01:34:40.787 01:34:40.787 ' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:34:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:40.787 --rc genhtml_branch_coverage=1 01:34:40.787 --rc genhtml_function_coverage=1 01:34:40.787 --rc genhtml_legend=1 01:34:40.787 --rc geninfo_all_blocks=1 01:34:40.787 --rc geninfo_unexecuted_blocks=1 01:34:40.787 01:34:40.787 ' 01:34:40.787 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:34:40.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:34:40.787 --rc genhtml_branch_coverage=1 01:34:40.787 --rc genhtml_function_coverage=1 01:34:40.787 --rc genhtml_legend=1 01:34:40.787 --rc geninfo_all_blocks=1 01:34:40.787 --rc geninfo_unexecuted_blocks=1 01:34:40.787 01:34:40.787 ' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:34:40.788 05:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:34:40.788 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:34:40.789 05:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:34:40.789 Cannot find device "nvmf_init_br" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:34:40.789 Cannot find device "nvmf_init_br2" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:34:40.789 Cannot find device "nvmf_tgt_br" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:34:40.789 Cannot find device "nvmf_tgt_br2" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:34:40.789 Cannot find device "nvmf_init_br" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:34:40.789 Cannot find device "nvmf_init_br2" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:34:40.789 Cannot find device "nvmf_tgt_br" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:34:40.789 Cannot find device "nvmf_tgt_br2" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:34:40.789 Cannot find device 
"nvmf_br" 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:34:40.789 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:34:41.050 Cannot find device "nvmf_init_if" 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:34:41.050 Cannot find device "nvmf_init_if2" 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:34:41.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:34:41.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:34:41.050 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:34:41.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:34:41.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 01:34:41.051 01:34:41.051 --- 10.0.0.3 ping statistics --- 01:34:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:41.051 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:34:41.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:34:41.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 01:34:41.051 01:34:41.051 --- 10.0.0.4 ping statistics --- 01:34:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:41.051 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:34:41.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:34:41.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:34:41.051 01:34:41.051 --- 10.0.0.1 ping statistics --- 01:34:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:41.051 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:34:41.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:34:41.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 01:34:41.051 01:34:41.051 --- 10.0.0.2 ping statistics --- 01:34:41.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:34:41.051 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
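The nvmf_veth_init steps above build the virtual topology the rest of this run depends on: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and the target side (nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2), with the target "if" ends moved into the nvmf_tgt_ns_spdk namespace, all "br" ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of the same sequence, using only commands that appear in the log (first initiator/target pair shown; the second pair is configured identically with 10.0.0.2 and 10.0.0.4):

    # Rough reconstruction of the nvmf_veth_init helper recorded above; run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                             # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host

The real helper in test/nvmf/common.sh also brings up nvmf_tgt_br and the second pair, and tags its iptables rules with an SPDK_NVMF comment so they can be removed later; the iptables-save | grep -v SPDK_NVMF | iptables-restore lines near the top of this section are that cleanup running for the previous test.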
01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104220 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104220 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104220 ']' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:41.051 05:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:34:41.312 [2024-12-09 05:29:32.671649] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:34:41.313 [2024-12-09 05:29:32.672411] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:34:41.313 [2024-12-09 05:29:32.672454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:34:41.313 [2024-12-09 05:29:32.806039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:41.313 [2024-12-09 05:29:32.847329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:34:41.313 [2024-12-09 05:29:32.847372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:34:41.313 [2024-12-09 05:29:32.847378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:34:41.313 [2024-12-09 05:29:32.847383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:34:41.313 [2024-12-09 05:29:32.847388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:34:41.313 [2024-12-09 05:29:32.847672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:34:41.313 [2024-12-09 05:29:32.911892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:34:41.313 [2024-12-09 05:29:32.912149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
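At this point the nvmf_tgt application has been started inside the target namespace with interrupt mode enabled, which is what distinguishes this nvmf_target_core_interrupt_mode run from the polling-mode variant: the NOTICE lines above confirm the tracepoint mask (0xFFFF), a single reactor on core 1 (core mask 0x2), and the app_thread / nvmf_tgt_poll_group_000 threads being switched to interrupt mode. A minimal sketch of the launch, with paths and flags copied from the xtrace output above and waitforlisten reduced to a comment:

    # Start the NVMe-oF target in the namespace; flags as recorded above:
    #   -i 0              shared-memory id
    #   -e 0xFFFF         tracepoint group mask (readable later with 'spdk_trace -s nvmf -i 0')
    #   --interrupt-mode  reactors wait for events instead of busy-polling
    #   -m 0x2            core mask -> one reactor, on core 1
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # waitforlisten (from autotest_common.sh) then blocks until the app answers JSON-RPC
    # requests on /var/tmp/spdk.sock before the test issues any rpc_cmd calls.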
01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 [2024-12-09 05:29:33.620249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 [2024-12-09 05:29:33.636449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:34:42.253 05:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 malloc0 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:42.253 { 01:34:42.253 "params": { 01:34:42.253 "name": "Nvme$subsystem", 01:34:42.253 "trtype": "$TEST_TRANSPORT", 01:34:42.253 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:42.253 "adrfam": "ipv4", 01:34:42.253 "trsvcid": "$NVMF_PORT", 01:34:42.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:42.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:42.253 "hdgst": ${hdgst:-false}, 01:34:42.253 "ddgst": ${ddgst:-false} 01:34:42.253 }, 01:34:42.253 "method": "bdev_nvme_attach_controller" 01:34:42.253 } 01:34:42.253 EOF 01:34:42.253 )") 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:34:42.253 05:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:42.253 "params": { 01:34:42.253 "name": "Nvme1", 01:34:42.253 "trtype": "tcp", 01:34:42.253 "traddr": "10.0.0.3", 01:34:42.253 "adrfam": "ipv4", 01:34:42.253 "trsvcid": "4420", 01:34:42.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:42.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:42.253 "hdgst": false, 01:34:42.253 "ddgst": false 01:34:42.253 }, 01:34:42.253 "method": "bdev_nvme_attach_controller" 01:34:42.253 }' 01:34:42.253 [2024-12-09 05:29:33.711815] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
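The provisioning recorded above follows the zcopy.sh flow: create the TCP transport with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), create subsystem nqn.2016-06.io.spdk:cnode1 with a limit of 10 namespaces, add a listener on 10.0.0.3:4420, back it with a 32 MB malloc bdev using 4096-byte blocks (bdev_malloc_create 32 4096 -b malloc0), and attach it as namespace 1. gen_nvmf_target_json (from test/nvmf/common.sh) then expands the heredoc template shown above into the bdev_nvme_attach_controller entry for Nvme1 pointing at that listener, and bdevperf consumes it through /dev/fd/62, i.e. a bash process substitution. A rough equivalent of the invocation, with the parameters the log shows (10-second verify workload, queue depth 128, 8 KiB I/O):

    # Sketch only: gen_nvmf_target_json emits the JSON printed above, and bdevperf
    # reads it as its --json bdev configuration.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

In the verify run that follows, the per-second progress climbs from roughly 6,900 to 8,100 IOPS, and the summary table reports an average of 8115.93 IOPS (63.41 MiB/s) on Nvme1n1.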
01:34:42.253 [2024-12-09 05:29:33.711897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104271 ] 01:34:42.253 [2024-12-09 05:29:33.850484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:42.513 [2024-12-09 05:29:33.889006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:42.513 Running I/O for 10 seconds... 01:34:44.820 6904.00 IOPS, 53.94 MiB/s [2024-12-09T05:29:37.370Z] 6969.50 IOPS, 54.45 MiB/s [2024-12-09T05:29:38.305Z] 6984.00 IOPS, 54.56 MiB/s [2024-12-09T05:29:39.239Z] 7023.25 IOPS, 54.87 MiB/s [2024-12-09T05:29:40.173Z] 7106.40 IOPS, 55.52 MiB/s [2024-12-09T05:29:41.108Z] 7443.17 IOPS, 58.15 MiB/s [2024-12-09T05:29:42.040Z] 7679.29 IOPS, 59.99 MiB/s [2024-12-09T05:29:43.411Z] 7859.12 IOPS, 61.40 MiB/s [2024-12-09T05:29:44.343Z] 7998.89 IOPS, 62.49 MiB/s [2024-12-09T05:29:44.343Z] 8111.90 IOPS, 63.37 MiB/s 01:34:52.726 Latency(us) 01:34:52.726 [2024-12-09T05:29:44.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:52.726 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:34:52.726 Verification LBA range: start 0x0 length 0x1000 01:34:52.727 Nvme1n1 : 10.01 8115.93 63.41 0.00 0.00 15728.62 2344.17 24702.03 01:34:52.727 [2024-12-09T05:29:44.344Z] =================================================================================================================== 01:34:52.727 [2024-12-09T05:29:44.344Z] Total : 8115.93 63.41 0.00 0.00 15728.62 2344.17 24702.03 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104389 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:34:52.727 { 01:34:52.727 "params": { 01:34:52.727 "name": "Nvme$subsystem", 01:34:52.727 "trtype": "$TEST_TRANSPORT", 01:34:52.727 "traddr": "$NVMF_FIRST_TARGET_IP", 01:34:52.727 "adrfam": "ipv4", 01:34:52.727 "trsvcid": "$NVMF_PORT", 01:34:52.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:34:52.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:34:52.727 "hdgst": ${hdgst:-false}, 01:34:52.727 "ddgst": ${ddgst:-false} 01:34:52.727 }, 01:34:52.727 "method": "bdev_nvme_attach_controller" 01:34:52.727 } 01:34:52.727 EOF 01:34:52.727 )") 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:34:52.727 [2024-12-09 
05:29:44.176016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.176048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:34:52.727 05:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:34:52.727 "params": { 01:34:52.727 "name": "Nvme1", 01:34:52.727 "trtype": "tcp", 01:34:52.727 "traddr": "10.0.0.3", 01:34:52.727 "adrfam": "ipv4", 01:34:52.727 "trsvcid": "4420", 01:34:52.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:34:52.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:34:52.727 "hdgst": false, 01:34:52.727 "ddgst": false 01:34:52.727 }, 01:34:52.727 "method": "bdev_nvme_attach_controller" 01:34:52.727 }' 01:34:52.727 [2024-12-09 05:29:44.183988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.184001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.191983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.191999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.198782] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
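The burst of near-identical messages here is JSON-RPC traffic generated while the second bdevperf job (random read/write, perfpid=104389, launched at zcopy.sh@37 above) is coming up: the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on cnode1, the target rejects each attempt because the namespace is still attached ("Requested NSID 1 already in use"), and rpc.py surfaces that as a -32602 Invalid parameters error. These rejections appear to be the namespace add/remove exercise zcopy.sh performs under active zero-copy I/O rather than a failure of the run. The call behind each error line, reconstructed from the parameters in the err message and identical to the zcopy.sh@30 call earlier in this section, is:

    # Each rejected request is an add of the already-attached namespace:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1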
01:34:52.727 [2024-12-09 05:29:44.198839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104389 ] 01:34:52.727 [2024-12-09 05:29:44.199983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.199996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.207984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.207998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.215984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.216000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.223984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.224003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.231985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.232002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.239985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.240002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.247986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.248003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.255984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.256002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.263985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.264002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.271985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.272003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.279985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.280002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.287985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.288002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.295985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.296001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.727 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.727 [2024-12-09 05:29:44.303985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.727 [2024-12-09 05:29:44.304002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.728 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.728 [2024-12-09 05:29:44.315985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.728 [2024-12-09 05:29:44.316000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.728 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.728 [2024-12-09 05:29:44.323982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.728 [2024-12-09 05:29:44.323999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.728 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.728 [2024-12-09 05:29:44.331616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:34:52.728 [2024-12-09 05:29:44.331982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.728 [2024-12-09 05:29:44.331997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.728 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.728 [2024-12-09 05:29:44.339983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.728 [2024-12-09 05:29:44.340001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.347983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.348001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.355983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.356001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.361420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:34:52.986 [2024-12-09 05:29:44.363982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.363999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.371983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.371999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.379987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.380006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.387985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.388003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.395983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.396002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.403983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.404000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.411983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.412001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.419982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.419998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.427991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.428015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.435987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.436005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.986 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.986 [2024-12-09 05:29:44.443985] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.986 [2024-12-09 05:29:44.444004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.451987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.452006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.459986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.460004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.467986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.468005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.479995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.480015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.487985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.488003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 Running I/O for 5 seconds... 
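The repeated *ERROR* lines above come from repeated nvmf_subsystem_add_ns JSON-RPC calls for NSID 1 on nqn.2016-06.io.spdk:cnode1: each attempt is rejected because that namespace is already in use, so the target answers with JSON-RPC error -32602 (Invalid parameters), while the initiator-side bdev_nvme_attach_controller parameters printed by nvmf/common.sh describe the controller used for the zcopy I/O. A minimal sketch of equivalent calls through SPDK's scripts/rpc.py follows; the flag names are assumptions based on typical SPDK usage and may differ between releases, so they are illustrative rather than the exact commands the harness ran.

  # Target side: adding NSID 1 a second time is rejected with -32602, as logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Initiator side: roughly the bdev_nvme_attach_controller parameters shown in the log
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1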
01:34:52.987 [2024-12-09 05:29:44.501259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.501285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.512037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.512063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.518341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.518366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.531341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.531368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.545181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.545205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.559254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.559279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.573005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 01:34:52.987 [2024-12-09 05:29:44.573030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.583066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.583090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:52.987 [2024-12-09 05:29:44.596931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:52.987 [2024-12-09 05:29:44.596956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:52.987 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.607603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.607626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.621589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.621613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.630807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.630832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.644458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.644481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.657635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.657660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.666014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.666040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.675958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.675981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.682155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.682180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.245 [2024-12-09 05:29:44.691519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.245 [2024-12-09 05:29:44.691544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.245 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.705589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.705613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.715042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.715066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.728683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.728706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.739546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.739580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.753254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.753279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.763978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.764003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.770234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.770258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.779072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.779097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.793172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.793197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.804556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.804587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.817546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.817580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.827734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.827758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.841357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.841381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:53.246 [2024-12-09 05:29:44.850535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.850574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.246 [2024-12-09 05:29:44.856875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.246 [2024-12-09 05:29:44.856899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.246 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.866629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.866653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.880543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.880574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.891241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.891266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.905030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.905052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.915516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.915541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.929081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.929104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.940762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.940784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.953496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.953521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.962056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.962080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.972257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.972280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.985310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.985334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:44.995379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:44.995402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.009266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.009291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.019504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.019527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.033260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.033284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.042743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.042767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.056584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.056608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.071276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.071302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.085457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.085482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.094747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.094772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.504 [2024-12-09 05:29:45.108752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.504 [2024-12-09 05:29:45.108775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.504 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.123255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.123280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.137007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.137031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.147557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.147590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.161078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.161102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.172221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.172243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.178381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.178404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.186244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.186267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.194770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.194793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:53.762 [2024-12-09 05:29:45.208389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.208413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.219834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.219857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.232999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.233023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.245533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.245558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.255836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.255861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.269639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.269663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:53.762 [2024-12-09 05:29:45.283310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:53.762 [2024-12-09 05:29:45.283334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:53.762 2024/12/09 05:29:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[retries 05:29:45.297 through 05:29:45.485 repeat the same three messages with only the timestamps changing: spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use; nvmf_rpc_ns_paused: Unable to add namespace; JSON-RPC nvmf_subsystem_add_ns error Code=-32602 Msg=Invalid parameters]
01:34:54.022 17939.00 IOPS, 140.15 MiB/s [2024-12-09T05:29:45.639Z]
[retries 05:29:45.494 through 05:29:45.921 repeat the same three messages]
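The loop above is a duplicate-NSID negative path: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so every further nvmf_subsystem_add_ns call for the same slot is rejected with JSON-RPC code -32602 while the I/O workload running in parallel keeps reporting throughput. A minimal by-hand sketch of the same failure, assuming a running SPDK target with its default RPC socket and using scripts/rpc.py (the --nsid option spelling is an assumption; the NQN, bdev name and NSID are the values from the log):

# first attach of malloc0 as NSID 1 succeeds and occupies the slot
scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 malloc0
# repeating the call for the occupied NSID is expected to fail the way the log shows:
#   spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
#   JSON-RPC error: Code=-32602 Msg=Invalid parameters
scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 malloc0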
[retries 05:29:45.931 through 05:29:46.441 repeat the same three messages]
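For reference, the Go client's params dump above maps one to one onto the JSON-RPC request body. Below is a reconstruction of one such request, with the field layout inferred from that dump and a standard JSON-RPC 2.0 envelope assumed (the id value and the /tmp path are illustrative); it is only written to a file, not sent, so nothing in the run is disturbed:

# reconstructed body of one failing nvmf_subsystem_add_ns call (illustrative only)
cat > /tmp/nvmf_add_ns_dup.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": { "bdev_name": "malloc0", "nsid": 1 }
  }
}
EOF
# while NSID 1 is occupied, the target answers this request with the error seen
# throughout the log: Code=-32602 Msg=Invalid parameters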
[retries 05:29:46.451 through 05:29:46.489 repeat the same three messages]
01:34:55.062 17914.00 IOPS, 139.95 MiB/s [2024-12-09T05:29:46.679Z]
[retries 05:29:46.500 through 05:29:46.707 repeat the same three messages]
01:34:55.322 [2024-12-09 05:29:46.721306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.721329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:55.322 [2024-12-09 05:29:46.731427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.731451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.745192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.745216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.760013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.760036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.767023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.767045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.780774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.780797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.792700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.792727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.805749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.805774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.814098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.814122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.823719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.823744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.837912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.837938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.847101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.847124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.861019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.861043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.871476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.871499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.885414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.885438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.894800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.894822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.908686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.908710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.920125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.920148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.926361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.926386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.322 [2024-12-09 05:29:46.934136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.322 [2024-12-09 05:29:46.934161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.322 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:46.943490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:46.943514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:46.957503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:46.957533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:46.966847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:46.966874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:46.981035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:46.981063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:46.993152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:46.993177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.005557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.005589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.014686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.014710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.020827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.020851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.031576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.031602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.045335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.045358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.054623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.054647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.068434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.068458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:55.582 [2024-12-09 05:29:47.079147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.079172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.092922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.092946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.103415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.103439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.117428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.117452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.128104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.128128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.582 [2024-12-09 05:29:47.134571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.582 [2024-12-09 05:29:47.134593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.582 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.142473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.142498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.156765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.156788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.169467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.169490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.178741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.178765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.184785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.184807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.583 [2024-12-09 05:29:47.195277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.583 [2024-12-09 05:29:47.195297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.583 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.201634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.201656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.212050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.212071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.218634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.218656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.231957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.231979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.238639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.238662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.247342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.247367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.261154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.261177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.272092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.272115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.278461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.278486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.288080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.288103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.294253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.294277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.303706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.303729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.317437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.317462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.331186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.331301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.345332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.345355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.842 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.842 [2024-12-09 05:29:47.355288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.842 [2024-12-09 05:29:47.355312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.368930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.368952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.380817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.380836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.393430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.393452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:55.843 [2024-12-09 05:29:47.405277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.405300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.417066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.417087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.428482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.428503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.441590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.441690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:55.843 [2024-12-09 05:29:47.452218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:55.843 [2024-12-09 05:29:47.452307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:55.843 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.458522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.458547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.466302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.466327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.473613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.473636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.483015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.483040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.489312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.489336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 17894.67 IOPS, 139.80 MiB/s [2024-12-09T05:29:47.719Z] [2024-12-09 05:29:47.499666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.499692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.102 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.102 [2024-12-09 05:29:47.513591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.102 [2024-12-09 05:29:47.513614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.522129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.522153] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.532208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.532232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.545463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.545488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.554536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.554568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.568290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.568314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.580314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.580338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:56.103 [2024-12-09 05:29:47.593688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:56.103 [2024-12-09 05:29:47.593710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC 
call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:34:56.103 [2024-12-09 05:29:47.603325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:34:56.103 [2024-12-09 05:29:47.603350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:34:56.103 2024/12/09 05:29:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure repeats for every retry timestamped 2024-12-09 05:29:47.617047 through 05:29:48.473122: subsystem.c:2126 reports "Requested NSID 1 already in use", nvmf_rpc.c:1520 reports "Unable to add namespace", and the JSON-RPC client logs Code=-32602 Msg=Invalid parameters ...]
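Every entry above is the same JSON-RPC method failing: nvmf_subsystem_add_ns is re-invoked for nqn.2016-06.io.spdk:cnode1 with nsid 1 while NSID 1 is already attached to the subsystem, so the target rejects each attempt with JSON-RPC error -32602 (Invalid parameters). As a rough illustration only (not part of this run; the script path, malloc bdev size, and the -n/--nsid flag spelling are assumptions from memory, not taken from this log), the equivalent calls with SPDK's scripts/rpc.py would look like:

  # create a bdev and a subsystem, then attach it twice with the same NSID
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first call succeeds, NSID 1 is now in use
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second call is rejected: Code=-32602, "Requested NSID 1 already in use"

The Go client in this test simply loops on the second kind of call, which is why the log shows the identical error for every attempt.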
01:34:56.884 [2024-12-09 05:29:48.482592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:34:56.884 [2024-12-09 05:29:48.482616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:34:56.884 2024/12/09 05:29:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:34:56.884 17900.00 IOPS, 139.84 MiB/s [2024-12-09T05:29:48.501Z]
[... the same three-line nvmf_subsystem_add_ns failure repeats for every retry timestamped 2024-12-09 05:29:48.496077 through 05:29:48.964717, each rejected with Code=-32602 because NSID 1 is already in use ...]
01:34:57.413 [2024-12-09 05:29:48.975985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:34:57.413 [2024-12-09 05:29:48.976008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 01:34:57.413 2024/12/09 05:29:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.413 [2024-12-09 05:29:48.982188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.413 [2024-12-09 05:29:48.982212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.413 2024/12/09 05:29:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.413 [2024-12-09 05:29:48.995260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.413 [2024-12-09 05:29:48.995285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.413 2024/12/09 05:29:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.413 [2024-12-09 05:29:49.009093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.413 [2024-12-09 05:29:49.009115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.413 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.413 [2024-12-09 05:29:49.018491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.413 [2024-12-09 05:29:49.018515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.413 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.032447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.032469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.042814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.042838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.056472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.056494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.068726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.068748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.081235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.081259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.090501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.090526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.104096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.104120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.111143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.111169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.125004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.125028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.135588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.135612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.149592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.149616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.158803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.158826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.172772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.172796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.187748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.187773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:34:57.680 [2024-12-09 05:29:49.201118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.201142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.212473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.212496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.225647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.225668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.235931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.235952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.242315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.242337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.680 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.680 [2024-12-09 05:29:49.256145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.680 [2024-12-09 05:29:49.256169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.681 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.681 [2024-12-09 05:29:49.263068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:34:57.681 [2024-12-09 05:29:49.263092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.681 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.681 [2024-12-09 05:29:49.276409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.681 [2024-12-09 05:29:49.276431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.681 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.681 [2024-12-09 05:29:49.287095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.681 [2024-12-09 05:29:49.287118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.681 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.300833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.300854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.311362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.311387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.325111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.325134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.334905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.334929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.348713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.348736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.359128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.359152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.372886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.372909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.383291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.383315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.397048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.397072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.407353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.407378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.421246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.421269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.435763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.435786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.449352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.449374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.459302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.459327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.473046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.473070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.939 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.939 [2024-12-09 05:29:49.482267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.939 [2024-12-09 05:29:49.482290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.488712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.488736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 17912.20 IOPS, 139.94 MiB/s [2024-12-09T05:29:49.557Z] [2024-12-09 05:29:49.498843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.498867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 01:34:57.940 Latency(us) 01:34:57.940 [2024-12-09T05:29:49.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:34:57.940 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:34:57.940 Nvme1n1 : 5.01 17914.70 139.96 0.00 0.00 7137.74 1840.05 13006.38 01:34:57.940 [2024-12-09T05:29:49.557Z] =================================================================================================================== 01:34:57.940 [2024-12-09T05:29:49.557Z] Total : 17914.70 139.96 0.00 0.00 7137.74 1840.05 13006.38 01:34:57.940 [2024-12-09 05:29:49.503992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.504013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.511993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.512013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.519988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.520005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.527990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.528010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.535990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.536008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.543987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.544007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:57.940 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:57.940 [2024-12-09 05:29:49.551987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:57.940 [2024-12-09 05:29:49.552005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.559985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.560001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.567985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.568001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.575985] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.576002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.583987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.584005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.591984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.592000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.599985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.600001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.607983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.607998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.615984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.615999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.623985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 
05:29:49.624001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 [2024-12-09 05:29:49.635986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:34:58.199 [2024-12-09 05:29:49.636004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:34:58.199 2024/12/09 05:29:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:34:58.199 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104389) - No such process 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104389 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:58.199 delay0 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:34:58.199 05:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 01:34:58.457 [2024-12-09 05:29:49.825533] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:35:06.566 Initializing NVMe Controllers 01:35:06.566 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:35:06.566 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:35:06.566 
Initialization complete. Launching workers. 01:35:06.566 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 215, failed: 42108 01:35:06.566 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 42182, failed to submit 141 01:35:06.566 success 42108, unsuccessful 74, failed 0 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:35:06.566 rmmod nvme_tcp 01:35:06.566 rmmod nvme_fabrics 01:35:06.566 rmmod nvme_keyring 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104220 ']' 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104220 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104220 ']' 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104220 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:06.566 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104220 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:35:06.566 killing process with pid 104220 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104220' 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104220 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104220 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
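Editor's note on the long run of Code=-32602 errors earlier in this test: while the background bdevperf job (pid 104389, the Nvme1n1 randrw workload summarized above) was still running, zcopy.sh kept re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1, and each call was rejected with "Requested NSID 1 already in use" because that namespace was still attached. The run then proceeds to the latency summary, teardown and the next test, so these rejections appear to be tolerated by the script rather than treated as failures. The map[namespace:map[...]] / %!s(bool=false) parameter dumps are the Go JSON-RPC client's formatting of the same failed calls. A minimal sketch of the same add/remove sequence against a running SPDK target is shown below; it is an illustration only, not part of the test scripts, and the rpc.py flag spellings (-n for the NSID) are assumptions to be checked against the SPDK build in use:

  # Sketch: reproduce the duplicate-NSID rejection seen in this log.
  # Assumes a running SPDK nvmf target with subsystem nqn.2016-06.io.spdk:cnode1
  # and a malloc bdev named malloc0, as created earlier in this run.
  NQN=nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1       # first add: NSID 1 attaches
  ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 \
      || echo "rejected: NSID 1 already in use (Code=-32602)"      # duplicate add: fails as in the log
  ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1               # detach NSID 1 again

The test itself drives these calls through rpc_cmd and the Go JSON-RPC client rather than this exact invocation, so treat the sketch as a rough equivalent, not the tested path.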
01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:35:06.566 ************************************ 01:35:06.566 END TEST nvmf_zcopy 01:35:06.566 ************************************ 01:35:06.566 01:35:06.566 real 0m25.213s 01:35:06.566 user 0m40.801s 01:35:06.566 sys 0m6.023s 01:35:06.566 05:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:06.566 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:35:06.566 ************************************ 01:35:06.567 START TEST nvmf_nmic 01:35:06.567 ************************************ 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:35:06.567 * Looking for test storage... 01:35:06.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:35:06.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:06.567 --rc genhtml_branch_coverage=1 01:35:06.567 --rc genhtml_function_coverage=1 01:35:06.567 --rc genhtml_legend=1 01:35:06.567 --rc geninfo_all_blocks=1 01:35:06.567 --rc geninfo_unexecuted_blocks=1 01:35:06.567 01:35:06.567 ' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:35:06.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:06.567 --rc genhtml_branch_coverage=1 01:35:06.567 --rc genhtml_function_coverage=1 01:35:06.567 --rc genhtml_legend=1 01:35:06.567 --rc geninfo_all_blocks=1 01:35:06.567 --rc geninfo_unexecuted_blocks=1 01:35:06.567 01:35:06.567 ' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:35:06.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:06.567 --rc genhtml_branch_coverage=1 01:35:06.567 --rc genhtml_function_coverage=1 01:35:06.567 --rc genhtml_legend=1 01:35:06.567 --rc geninfo_all_blocks=1 01:35:06.567 --rc geninfo_unexecuted_blocks=1 01:35:06.567 01:35:06.567 ' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:35:06.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:06.567 --rc genhtml_branch_coverage=1 01:35:06.567 --rc genhtml_function_coverage=1 01:35:06.567 --rc genhtml_legend=1 01:35:06.567 --rc geninfo_all_blocks=1 01:35:06.567 --rc geninfo_unexecuted_blocks=1 01:35:06.567 01:35:06.567 ' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:35:06.567 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:35:06.568 05:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:35:06.568 Cannot find device "nvmf_init_br" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:35:06.568 Cannot find device "nvmf_init_br2" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:35:06.568 Cannot find device "nvmf_tgt_br" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:35:06.568 Cannot find device "nvmf_tgt_br2" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:35:06.568 Cannot find device "nvmf_init_br" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:35:06.568 Cannot find device "nvmf_init_br2" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:35:06.568 Cannot find device "nvmf_tgt_br" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:35:06.568 Cannot find device "nvmf_tgt_br2" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
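Every teardown command above prints "Cannot find device" on a clean host and is immediately followed by true, so leftovers from a previous run are removed without tripping the script's error handling. A condensed sketch of that idempotent cleanup pattern (a paraphrase of the trace, not the literal nvmf_veth_init source) looks roughly like:

# sketch only: tear down any stale test interfaces; a missing device is not an error
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true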
01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:35:06.568 Cannot find device "nvmf_br" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:35:06.568 Cannot find device "nvmf_init_if" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:35:06.568 Cannot find device "nvmf_init_if2" 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:06.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:06.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:35:06.568 05:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:35:06.568 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:35:06.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:35:06.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 01:35:06.569 01:35:06.569 --- 10.0.0.3 ping statistics --- 01:35:06.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:06.569 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:35:06.569 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:35:06.569 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 01:35:06.569 01:35:06.569 --- 10.0.0.4 ping statistics --- 01:35:06.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:06.569 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:35:06.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:35:06.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 01:35:06.569 01:35:06.569 --- 10.0.0.1 ping statistics --- 01:35:06.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:06.569 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:35:06.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:35:06.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 01:35:06.569 01:35:06.569 --- 10.0.0.2 ping statistics --- 01:35:06.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:06.569 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=104772 01:35:06.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
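All four pings succeed, which confirms the virtual test network is in place before the target application is started. Condensed from the trace above (a recap of the commands shown there, not the literal common.sh source), the topology amounts to:

# two veth pairs face the initiator on the host, two face the target inside a netns
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host side, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host side, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, 10.0.0.4
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# the four peer ("*_br") ends are enslaved to a single bridge so both sides can reach each other
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# NVMe/TCP traffic to port 4420 is accepted; every rule carries an SPDK_NVMF comment so the
# teardown step can later strip exactly these rules from the iptables-save output
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The nvmf_tgt application is then launched inside nvmf_tgt_ns_spdk (with --interrupt-mode for this suite), which is why the listeners created below bind to the namespace-side address 10.0.0.3.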
01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 104772 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104772 ']' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:06.569 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:06.569 [2024-12-09 05:29:57.953021] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:35:06.569 [2024-12-09 05:29:57.953793] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:06.569 [2024-12-09 05:29:57.953833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:35:06.569 [2024-12-09 05:29:58.099349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:35:06.569 [2024-12-09 05:29:58.135601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:35:06.569 [2024-12-09 05:29:58.135667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:35:06.569 [2024-12-09 05:29:58.135674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:35:06.569 [2024-12-09 05:29:58.135679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:35:06.569 [2024-12-09 05:29:58.135683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:35:06.569 [2024-12-09 05:29:58.136469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:35:06.569 [2024-12-09 05:29:58.136743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:35:06.569 [2024-12-09 05:29:58.137670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:35:06.569 [2024-12-09 05:29:58.137844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:06.827 [2024-12-09 05:29:58.191695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:35:06.827 [2024-12-09 05:29:58.192585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:35:06.827 [2024-12-09 05:29:58.193178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:35:06.827 [2024-12-09 05:29:58.193262] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:35:06.827 [2024-12-09 05:29:58.193442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.394 [2024-12-09 05:29:58.878361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.394 Malloc0 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.394 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 [2024-12-09 05:29:58.938453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:35:07.395 test case1: single bdev can't be used in multiple subsystems 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 [2024-12-09 05:29:58.962202] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:35:07.395 [2024-12-09 05:29:58.962228] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:35:07.395 [2024-12-09 05:29:58.962235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:35:07.395 2024/12/09 05:29:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:35:07.395 request: 01:35:07.395 { 01:35:07.395 "method": "nvmf_subsystem_add_ns", 01:35:07.395 "params": { 01:35:07.395 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:35:07.395 "namespace": { 01:35:07.395 "bdev_name": "Malloc0", 01:35:07.395 "no_auto_visible": false, 01:35:07.395 "hide_metadata": false 01:35:07.395 } 01:35:07.395 } 01:35:07.395 } 01:35:07.395 Got JSON-RPC error response 01:35:07.395 GoRPCClient: error on JSON-RPC call 
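The JSON-RPC failure above is the expected outcome of test case 1: Malloc0 is already claimed with an exclusive_write descriptor by cnode1, so a second subsystem cannot add it as a namespace. Reconstructed from the trace (commands and arguments as they appear there, wrapped here in a plain rpc.py sketch rather than the script's rpc_cmd helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # claims the bdev
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'Adding namespace failed - expected result.'                  # the test expects this failure
fi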
01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:35:07.395 Adding namespace failed - expected result. 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:35:07.395 test case2: host connect to nvmf target in multiple paths 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:07.395 [2024-12-09 05:29:58.974276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:07.395 05:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:35:07.653 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:35:09.561 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:35:09.561 
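Test case 2 connects the host to the same subsystem through two listeners (ports 4420 and 4421 on 10.0.0.3), so one namespace is reachable over two paths; the loop the script has just entered only polls lsblk until a device carrying the test serial appears. A condensed sketch of what the trace is doing (NVME_HOSTNQN and NVME_HOSTID come from the common.sh setup earlier in this log):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
# waitforserial: give the kernel up to ~30 s to surface a block device with the test serial
for ((i = 0; i <= 15; i++)); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done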
05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:35:09.561 [global] 01:35:09.561 thread=1 01:35:09.561 invalidate=1 01:35:09.561 rw=write 01:35:09.561 time_based=1 01:35:09.561 runtime=1 01:35:09.561 ioengine=libaio 01:35:09.561 direct=1 01:35:09.561 bs=4096 01:35:09.561 iodepth=1 01:35:09.561 norandommap=0 01:35:09.561 numjobs=1 01:35:09.561 01:35:09.561 verify_dump=1 01:35:09.561 verify_backlog=512 01:35:09.561 verify_state_save=0 01:35:09.561 do_verify=1 01:35:09.561 verify=crc32c-intel 01:35:09.561 [job0] 01:35:09.561 filename=/dev/nvme0n1 01:35:09.561 Could not set queue depth (nvme0n1) 01:35:09.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:09.819 fio-3.35 01:35:09.819 Starting 1 thread 01:35:11.191 01:35:11.191 job0: (groupid=0, jobs=1): err= 0: pid=104876: Mon Dec 9 05:30:02 2024 01:35:11.191 read: IOPS=4796, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1000msec) 01:35:11.191 slat (nsec): min=6068, max=94195, avg=7478.81, stdev=4138.34 01:35:11.191 clat (usec): min=60, max=378, avg=103.70, stdev=15.63 01:35:11.191 lat (usec): min=89, max=385, avg=111.17, stdev=16.29 01:35:11.191 clat percentiles (usec): 01:35:11.191 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 94], 01:35:11.191 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 99], 60.00th=[ 101], 01:35:11.191 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 124], 01:35:11.191 | 99.00th=[ 133], 99.50th=[ 176], 99.90th=[ 289], 99.95th=[ 375], 01:35:11.191 | 99.99th=[ 379] 01:35:11.191 write: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1000msec); 0 zone resets 01:35:11.191 slat (usec): min=9, max=114, avg=12.54, stdev= 6.91 01:35:11.191 clat (usec): min=55, max=401, avg=76.85, stdev=14.20 01:35:11.191 lat (usec): min=69, max=429, avg=89.39, stdev=17.94 01:35:11.191 clat percentiles (usec): 01:35:11.191 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 69], 01:35:11.191 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 76], 01:35:11.191 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 97], 01:35:11.191 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 247], 99.95th=[ 355], 01:35:11.191 | 99.99th=[ 400] 01:35:11.191 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 01:35:11.191 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 01:35:11.191 lat (usec) : 100=76.94%, 250=22.91%, 500=0.15% 01:35:11.191 cpu : usr=1.50%, sys=7.80%, ctx=9926, majf=0, minf=5 01:35:11.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:11.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:11.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:11.191 issued rwts: total=4796,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:11.191 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:11.191 01:35:11.191 Run status group 0 (all jobs): 01:35:11.191 READ: bw=18.7MiB/s (19.6MB/s), 18.7MiB/s-18.7MiB/s (19.6MB/s-19.6MB/s), io=18.7MiB (19.6MB), run=1000-1000msec 01:35:11.191 WRITE: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=20.0MiB (21.0MB), run=1000-1000msec 01:35:11.191 01:35:11.191 Disk stats (read/write): 01:35:11.191 nvme0n1: ios=4275/4608, merge=0/0, ticks=459/374, in_queue=833, util=90.98% 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:35:11.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:35:11.191 rmmod nvme_tcp 01:35:11.191 rmmod nvme_fabrics 01:35:11.191 rmmod nvme_keyring 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 104772 ']' 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 104772 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104772 ']' 01:35:11.191 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104772 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104772 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 01:35:11.192 killing process with pid 104772 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104772' 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104772 01:35:11.192 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104772 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:35:11.449 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:11.449 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:35:11.709 01:35:11.709 real 0m5.663s 01:35:11.709 user 0m14.895s 01:35:11.709 sys 0m1.696s 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:11.709 ************************************ 01:35:11.709 END TEST nvmf_nmic 01:35:11.709 ************************************ 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:35:11.709 ************************************ 01:35:11.709 START TEST nvmf_fio_target 01:35:11.709 ************************************ 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:35:11.709 * Looking for test storage... 
01:35:11.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:35:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:11.709 --rc genhtml_branch_coverage=1 01:35:11.709 --rc genhtml_function_coverage=1 01:35:11.709 --rc genhtml_legend=1 01:35:11.709 --rc geninfo_all_blocks=1 01:35:11.709 --rc geninfo_unexecuted_blocks=1 01:35:11.709 01:35:11.709 ' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:35:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:11.709 --rc genhtml_branch_coverage=1 01:35:11.709 --rc genhtml_function_coverage=1 01:35:11.709 --rc genhtml_legend=1 01:35:11.709 --rc geninfo_all_blocks=1 01:35:11.709 --rc geninfo_unexecuted_blocks=1 01:35:11.709 01:35:11.709 ' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:35:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:11.709 --rc genhtml_branch_coverage=1 01:35:11.709 --rc genhtml_function_coverage=1 01:35:11.709 --rc genhtml_legend=1 01:35:11.709 --rc geninfo_all_blocks=1 01:35:11.709 --rc geninfo_unexecuted_blocks=1 01:35:11.709 01:35:11.709 ' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:35:11.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:11.709 --rc genhtml_branch_coverage=1 01:35:11.709 --rc genhtml_function_coverage=1 01:35:11.709 --rc genhtml_legend=1 01:35:11.709 --rc geninfo_all_blocks=1 01:35:11.709 --rc geninfo_unexecuted_blocks=1 01:35:11.709 
01:35:11.709 ' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:35:11.709 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:35:11.710 05:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:35:11.710 Cannot find device "nvmf_init_br" 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:35:11.710 Cannot find device "nvmf_init_br2" 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:35:11.710 Cannot find device "nvmf_tgt_br" 01:35:11.710 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:35:11.968 Cannot find device "nvmf_tgt_br2" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:35:11.968 Cannot find device "nvmf_init_br" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:35:11.968 Cannot find device "nvmf_init_br2" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:35:11.968 Cannot find device "nvmf_tgt_br" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:35:11.968 Cannot find device "nvmf_tgt_br2" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:35:11.968 Cannot find device "nvmf_br" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:35:11.968 Cannot find device "nvmf_init_if" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:35:11.968 Cannot find device "nvmf_init_if2" 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:11.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:11.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:35:11.968 05:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:35:11.968 05:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:35:11.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:35:11.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 01:35:11.968 01:35:11.968 --- 10.0.0.3 ping statistics --- 01:35:11.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:11.968 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:35:11.968 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:35:11.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:35:11.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 01:35:11.969 01:35:11.969 --- 10.0.0.4 ping statistics --- 01:35:11.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:11.969 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:35:11.969 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:35:12.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:35:12.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:35:12.227 01:35:12.227 --- 10.0.0.1 ping statistics --- 01:35:12.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:12.227 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:35:12.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:35:12.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:35:12.227 01:35:12.227 --- 10.0.0.2 ping statistics --- 01:35:12.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:12.227 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105109 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105109 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105109 ']' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:12.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:35:12.227 05:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:35:12.227 [2024-12-09 05:30:03.641340] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
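For reference, the nvmf_veth_init commands traced above reduce to the topology script below. This is a minimal consolidated sketch, assuming root privileges plus the iproute2 and iptables tools, and reusing the namespace and interface names from the log; the suite's common.sh additionally tags each iptables rule with an 'SPDK_NVMF:' comment so it can find and remove the rule during cleanup.

#!/usr/bin/env bash
# Sketch of the virtual test network built by nvmf_veth_init (names as in the trace).
set -e

ip netns add nvmf_tgt_ns_spdk                     # the SPDK target runs inside this namespace

# Two initiator-side and two target-side veth pairs; the *_br peers get bridged later.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator side 10.0.0.1/.2, target side 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target interfaces can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator interfaces, allow bridged forwarding, sanity-check.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1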
01:35:12.227 [2024-12-09 05:30:03.641977] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:12.227 [2024-12-09 05:30:03.642020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:35:12.227 [2024-12-09 05:30:03.777939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:35:12.227 [2024-12-09 05:30:03.814202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:35:12.227 [2024-12-09 05:30:03.814242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:35:12.227 [2024-12-09 05:30:03.814249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:35:12.227 [2024-12-09 05:30:03.814254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:35:12.227 [2024-12-09 05:30:03.814259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:35:12.227 [2024-12-09 05:30:03.814896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:35:12.227 [2024-12-09 05:30:03.814993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:35:12.227 [2024-12-09 05:30:03.815528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:35:12.227 [2024-12-09 05:30:03.815531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:12.484 [2024-12-09 05:30:03.870716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:35:12.484 [2024-12-09 05:30:03.871772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:35:12.484 [2024-12-09 05:30:03.872076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:35:12.484 [2024-12-09 05:30:03.872287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:35:12.484 [2024-12-09 05:30:03.873224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
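With the target now running in interrupt mode inside nvmf_tgt_ns_spdk, the trace that follows shows target/fio.sh configuring it one rpc.py call at a time and then attaching the host. A condensed sketch of that sequence, assuming the target serves RPCs on the default /var/tmp/spdk.sock socket and using the bdev names (Malloc0..Malloc6) that rpc.py returns in the log:

#!/usr/bin/env bash
# Sketch of the target/fio.sh configuration steps traced below.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# TCP transport with the same options as the trace (-t tcp -o -u 8192).
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512-byte-block Malloc bdevs: two exported directly,
# two combined into a raid0 array and three into a concat array.
for _ in 0 1 2 3 4 5 6; do "$rpc_py" bdev_malloc_create 64 512; done
"$rpc_py" bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
"$rpc_py" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, listener on the in-namespace address 10.0.0.3:4420.
"$rpc_py" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns "$NQN" Malloc0
"$rpc_py" nvmf_subsystem_add_ns "$NQN" Malloc1
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
"$rpc_py" nvmf_subsystem_add_ns "$NQN" raid0
"$rpc_py" nvmf_subsystem_add_ns "$NQN" concat0

# Host side: connect over NVMe/TCP (hostnqn/hostid as generated earlier in the trace),
# wait for /dev/nvme0n1..n4 to show up, then run the fio jobs listed below.
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"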
01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:35:13.050 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:35:13.307 [2024-12-09 05:30:04.692409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:35:13.307 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:13.577 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:35:13.577 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:13.577 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:35:13.577 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:13.833 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:35:13.834 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:14.090 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:35:14.090 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:35:14.347 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:14.605 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:35:14.605 05:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:14.605 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:35:14.605 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:35:14.865 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:35:14.865 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:35:15.123 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:35:15.380 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:35:15.380 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:35:15.637 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:35:15.637 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:35:15.894 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:35:15.894 [2024-12-09 05:30:07.448260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:35:15.894 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:35:16.154 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:35:16.412 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:35:16.412 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:35:16.412 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:35:16.412 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:35:16.412 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:35:16.413 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:35:16.413 05:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:35:18.939 05:30:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:35:18.939 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:35:18.939 [global] 01:35:18.939 thread=1 01:35:18.939 invalidate=1 01:35:18.939 rw=write 01:35:18.939 time_based=1 01:35:18.939 runtime=1 01:35:18.939 ioengine=libaio 01:35:18.939 direct=1 01:35:18.939 bs=4096 01:35:18.939 iodepth=1 01:35:18.939 norandommap=0 01:35:18.939 numjobs=1 01:35:18.939 01:35:18.939 verify_dump=1 01:35:18.939 verify_backlog=512 01:35:18.939 verify_state_save=0 01:35:18.939 do_verify=1 01:35:18.939 verify=crc32c-intel 01:35:18.939 [job0] 01:35:18.939 filename=/dev/nvme0n1 01:35:18.939 [job1] 01:35:18.939 filename=/dev/nvme0n2 01:35:18.939 [job2] 01:35:18.939 filename=/dev/nvme0n3 01:35:18.939 [job3] 01:35:18.939 filename=/dev/nvme0n4 01:35:18.939 Could not set queue depth (nvme0n1) 01:35:18.939 Could not set queue depth (nvme0n2) 01:35:18.939 Could not set queue depth (nvme0n3) 01:35:18.939 Could not set queue depth (nvme0n4) 01:35:18.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:18.939 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:18.939 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:18.939 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:18.939 fio-3.35 01:35:18.939 Starting 4 threads 01:35:19.887 01:35:19.887 job0: (groupid=0, jobs=1): err= 0: pid=105391: Mon Dec 9 05:30:11 2024 01:35:19.887 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:35:19.887 slat (nsec): min=4766, max=67013, avg=7533.79, stdev=3331.71 01:35:19.887 clat (usec): min=109, max=1085, avg=198.79, stdev=53.75 01:35:19.887 lat (usec): min=115, max=1104, avg=206.33, stdev=54.10 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 188], 01:35:19.887 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 01:35:19.887 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 235], 01:35:19.887 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 766], 99.95th=[ 1037], 01:35:19.887 | 99.99th=[ 1090] 01:35:19.887 write: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 01:35:19.887 slat (nsec): min=7350, max=96524, avg=10686.13, stdev=4458.84 01:35:19.887 clat (usec): min=76, max=1311, avg=148.92, stdev=36.14 01:35:19.887 lat (usec): min=89, max=1321, avg=159.60, stdev=36.15 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 85], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 137], 01:35:19.887 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 01:35:19.887 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 01:35:19.887 | 99.00th=[ 206], 99.50th=[ 239], 99.90th=[ 351], 99.95th=[ 359], 01:35:19.887 | 99.99th=[ 1319] 01:35:19.887 bw ( KiB/s): min=12288, max=12288, per=20.59%, avg=12288.00, stdev= 0.00, samples=1 01:35:19.887 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:35:19.887 lat (usec) : 100=7.47%, 250=90.63%, 500=1.71%, 750=0.11%, 1000=0.02% 01:35:19.887 
lat (msec) : 2=0.05% 01:35:19.887 cpu : usr=0.80%, sys=3.90%, ctx=5499, majf=0, minf=15 01:35:19.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:19.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 issued rwts: total=2560,2927,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:19.887 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:19.887 job1: (groupid=0, jobs=1): err= 0: pid=105392: Mon Dec 9 05:30:11 2024 01:35:19.887 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec) 01:35:19.887 slat (nsec): min=4975, max=36820, avg=6884.99, stdev=1895.17 01:35:19.887 clat (usec): min=97, max=1471, avg=119.64, stdev=31.41 01:35:19.887 lat (usec): min=104, max=1485, avg=126.52, stdev=31.46 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 101], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 109], 01:35:19.887 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 01:35:19.887 | 70.00th=[ 119], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 172], 01:35:19.887 | 99.00th=[ 231], 99.50th=[ 247], 99.90th=[ 318], 99.95th=[ 351], 01:35:19.887 | 99.99th=[ 1467] 01:35:19.887 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 01:35:19.887 slat (nsec): min=7980, max=89186, avg=11340.12, stdev=5068.78 01:35:19.887 clat (usec): min=67, max=379, avg=90.75, stdev=18.46 01:35:19.887 lat (usec): min=80, max=390, avg=102.09, stdev=19.53 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 01:35:19.887 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 88], 01:35:19.887 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 113], 95.00th=[ 130], 01:35:19.887 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 293], 99.95th=[ 306], 01:35:19.887 | 99.99th=[ 379] 01:35:19.887 bw ( KiB/s): min=19616, max=19616, per=32.86%, avg=19616.00, stdev= 0.00, samples=1 01:35:19.887 iops : min= 4904, max= 4904, avg=4904.00, stdev= 0.00, samples=1 01:35:19.887 lat (usec) : 100=44.75%, 250=54.97%, 500=0.27% 01:35:19.887 lat (msec) : 2=0.01% 01:35:19.887 cpu : usr=1.30%, sys=6.40%, ctx=8736, majf=0, minf=5 01:35:19.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:19.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 issued rwts: total=4128,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:19.887 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:19.887 job2: (groupid=0, jobs=1): err= 0: pid=105393: Mon Dec 9 05:30:11 2024 01:35:19.887 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 01:35:19.887 slat (nsec): min=5987, max=29921, avg=6785.60, stdev=783.68 01:35:19.887 clat (usec): min=89, max=1245, avg=108.26, stdev=19.29 01:35:19.887 lat (usec): min=95, max=1252, avg=115.05, stdev=19.29 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 102], 01:35:19.887 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 109], 01:35:19.887 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 124], 01:35:19.887 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 169], 99.95th=[ 247], 01:35:19.887 | 99.99th=[ 1254] 01:35:19.887 write: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(18.8MiB/1001msec); 0 zone resets 01:35:19.887 slat (usec): min=9, max=149, avg=11.42, stdev= 4.93 
01:35:19.887 clat (usec): min=66, max=463, avg=84.86, stdev=11.54 01:35:19.887 lat (usec): min=76, max=492, avg=96.28, stdev=13.59 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 01:35:19.887 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 01:35:19.887 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 102], 01:35:19.887 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 153], 99.95th=[ 281], 01:35:19.887 | 99.99th=[ 465] 01:35:19.887 bw ( KiB/s): min=20480, max=20480, per=34.31%, avg=20480.00, stdev= 0.00, samples=1 01:35:19.887 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 01:35:19.887 lat (usec) : 100=52.75%, 250=47.20%, 500=0.04% 01:35:19.887 lat (msec) : 2=0.01% 01:35:19.887 cpu : usr=1.30%, sys=6.70%, ctx=9411, majf=0, minf=19 01:35:19.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:19.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.887 issued rwts: total=4608,4803,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:19.887 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:19.887 job3: (groupid=0, jobs=1): err= 0: pid=105394: Mon Dec 9 05:30:11 2024 01:35:19.887 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:35:19.887 slat (nsec): min=5151, max=36657, avg=7190.78, stdev=2066.92 01:35:19.887 clat (usec): min=119, max=688, avg=202.46, stdev=23.99 01:35:19.887 lat (usec): min=124, max=697, avg=209.65, stdev=24.28 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 143], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 192], 01:35:19.887 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 01:35:19.887 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 01:35:19.887 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 396], 99.95th=[ 490], 01:35:19.887 | 99.99th=[ 693] 01:35:19.887 write: IOPS=2597, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec); 0 zone resets 01:35:19.887 slat (usec): min=8, max=148, avg=11.76, stdev= 5.67 01:35:19.887 clat (usec): min=86, max=7752, avg=164.75, stdev=213.58 01:35:19.887 lat (usec): min=95, max=7761, avg=176.51, stdev=213.65 01:35:19.887 clat percentiles (usec): 01:35:19.887 | 1.00th=[ 99], 5.00th=[ 131], 10.00th=[ 145], 20.00th=[ 149], 01:35:19.887 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 01:35:19.887 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 01:35:19.887 | 99.00th=[ 206], 99.50th=[ 318], 99.90th=[ 1434], 99.95th=[ 7701], 01:35:19.887 | 99.99th=[ 7767] 01:35:19.887 bw ( KiB/s): min=12288, max=12288, per=20.59%, avg=12288.00, stdev= 0.00, samples=1 01:35:19.887 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:35:19.888 lat (usec) : 100=0.60%, 250=98.24%, 500=0.95%, 750=0.14%, 1000=0.02% 01:35:19.888 lat (msec) : 2=0.02%, 10=0.04% 01:35:19.888 cpu : usr=0.80%, sys=3.80%, ctx=5161, majf=0, minf=7 01:35:19.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:19.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:19.888 issued rwts: total=2560,2600,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:19.888 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:19.888 01:35:19.888 Run status group 0 (all jobs): 01:35:19.888 READ: bw=54.1MiB/s (56.7MB/s), 
9.99MiB/s-18.0MiB/s (10.5MB/s-18.9MB/s), io=54.1MiB (56.8MB), run=1001-1001msec 01:35:19.888 WRITE: bw=58.3MiB/s (61.1MB/s), 10.1MiB/s-18.7MiB/s (10.6MB/s-19.7MB/s), io=58.4MiB (61.2MB), run=1001-1001msec 01:35:19.888 01:35:19.888 Disk stats (read/write): 01:35:19.888 nvme0n1: ios=2145/2560, merge=0/0, ticks=448/407, in_queue=855, util=89.18% 01:35:19.888 nvme0n2: ios=3885/4096, merge=0/0, ticks=509/369, in_queue=878, util=93.76% 01:35:19.888 nvme0n3: ios=4148/4176, merge=0/0, ticks=531/370, in_queue=901, util=93.99% 01:35:19.888 nvme0n4: ios=2048/2463, merge=0/0, ticks=422/412, in_queue=834, util=89.38% 01:35:19.888 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:35:19.888 [global] 01:35:19.888 thread=1 01:35:19.888 invalidate=1 01:35:19.888 rw=randwrite 01:35:19.888 time_based=1 01:35:19.888 runtime=1 01:35:19.888 ioengine=libaio 01:35:19.888 direct=1 01:35:19.888 bs=4096 01:35:19.888 iodepth=1 01:35:19.888 norandommap=0 01:35:19.888 numjobs=1 01:35:19.888 01:35:19.888 verify_dump=1 01:35:19.888 verify_backlog=512 01:35:19.888 verify_state_save=0 01:35:19.888 do_verify=1 01:35:19.888 verify=crc32c-intel 01:35:19.888 [job0] 01:35:19.888 filename=/dev/nvme0n1 01:35:19.888 [job1] 01:35:19.888 filename=/dev/nvme0n2 01:35:19.888 [job2] 01:35:19.888 filename=/dev/nvme0n3 01:35:19.888 [job3] 01:35:19.888 filename=/dev/nvme0n4 01:35:19.888 Could not set queue depth (nvme0n1) 01:35:19.888 Could not set queue depth (nvme0n2) 01:35:19.888 Could not set queue depth (nvme0n3) 01:35:19.888 Could not set queue depth (nvme0n4) 01:35:19.888 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:19.888 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:19.888 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:19.888 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:19.888 fio-3.35 01:35:19.888 Starting 4 threads 01:35:21.261 01:35:21.261 job0: (groupid=0, jobs=1): err= 0: pid=105447: Mon Dec 9 05:30:12 2024 01:35:21.261 read: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 01:35:21.261 slat (nsec): min=6213, max=69137, avg=7471.78, stdev=3367.22 01:35:21.261 clat (usec): min=92, max=734, avg=152.16, stdev=23.50 01:35:21.261 lat (usec): min=99, max=741, avg=159.63, stdev=24.30 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 98], 5.00th=[ 113], 10.00th=[ 139], 20.00th=[ 143], 01:35:21.261 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 01:35:21.261 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 192], 01:35:21.261 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 318], 99.95th=[ 478], 01:35:21.261 | 99.99th=[ 734] 01:35:21.261 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 01:35:21.261 slat (nsec): min=9275, max=94564, avg=11607.46, stdev=5871.04 01:35:21.261 clat (usec): min=57, max=1257, avg=115.17, stdev=24.36 01:35:21.261 lat (usec): min=77, max=1267, avg=126.78, stdev=24.30 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 80], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 108], 01:35:21.261 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 01:35:21.261 | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 130], 95.00th=[ 137], 01:35:21.261 
| 99.00th=[ 153], 99.50th=[ 206], 99.90th=[ 310], 99.95th=[ 371], 01:35:21.261 | 99.99th=[ 1254] 01:35:21.261 bw ( KiB/s): min=16224, max=16224, per=24.52%, avg=16224.00, stdev= 0.00, samples=1 01:35:21.261 iops : min= 4056, max= 4056, avg=4056.00, stdev= 0.00, samples=1 01:35:21.261 lat (usec) : 100=3.06%, 250=96.73%, 500=0.19%, 750=0.01% 01:35:21.261 lat (msec) : 2=0.01% 01:35:21.261 cpu : usr=0.80%, sys=5.40%, ctx=6964, majf=0, minf=13 01:35:21.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:21.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 issued rwts: total=3380,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:21.261 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:21.261 job1: (groupid=0, jobs=1): err= 0: pid=105448: Mon Dec 9 05:30:12 2024 01:35:21.261 read: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec) 01:35:21.261 slat (nsec): min=6157, max=81914, avg=7249.52, stdev=2983.18 01:35:21.261 clat (usec): min=86, max=1493, avg=151.82, stdev=31.67 01:35:21.261 lat (usec): min=92, max=1506, avg=159.07, stdev=32.14 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 98], 5.00th=[ 109], 10.00th=[ 137], 20.00th=[ 143], 01:35:21.261 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 01:35:21.261 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 194], 01:35:21.261 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 310], 99.95th=[ 494], 01:35:21.261 | 99.99th=[ 1500] 01:35:21.261 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 01:35:21.261 slat (nsec): min=9139, max=84216, avg=10686.09, stdev=3881.05 01:35:21.261 clat (usec): min=65, max=1266, avg=115.78, stdev=28.01 01:35:21.261 lat (usec): min=76, max=1277, avg=126.47, stdev=28.04 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 78], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 109], 01:35:21.261 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 01:35:21.261 | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 130], 95.00th=[ 137], 01:35:21.261 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 363], 99.95th=[ 979], 01:35:21.261 | 99.99th=[ 1270] 01:35:21.261 bw ( KiB/s): min=15856, max=15856, per=23.96%, avg=15856.00, stdev= 0.00, samples=1 01:35:21.261 iops : min= 3964, max= 3964, avg=3964.00, stdev= 0.00, samples=1 01:35:21.261 lat (usec) : 100=2.58%, 250=97.18%, 500=0.20%, 1000=0.01% 01:35:21.261 lat (msec) : 2=0.03% 01:35:21.261 cpu : usr=1.00%, sys=4.90%, ctx=6986, majf=0, minf=10 01:35:21.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:21.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 issued rwts: total=3401,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:21.261 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:21.261 job2: (groupid=0, jobs=1): err= 0: pid=105449: Mon Dec 9 05:30:12 2024 01:35:21.261 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 01:35:21.261 slat (nsec): min=6237, max=24825, avg=6848.46, stdev=806.55 01:35:21.261 clat (usec): min=99, max=1174, avg=121.80, stdev=19.45 01:35:21.261 lat (usec): min=105, max=1181, avg=128.65, stdev=19.45 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 109], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 01:35:21.261 | 30.00th=[ 117], 40.00th=[ 
118], 50.00th=[ 120], 60.00th=[ 122], 01:35:21.261 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 01:35:21.261 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 200], 01:35:21.261 | 99.99th=[ 1172] 01:35:21.261 write: IOPS=4514, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1001msec); 0 zone resets 01:35:21.261 slat (nsec): min=9509, max=94446, avg=10803.97, stdev=3224.88 01:35:21.261 clat (usec): min=75, max=327, avg=92.38, stdev= 9.98 01:35:21.261 lat (usec): min=86, max=356, avg=103.19, stdev=10.87 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 01:35:21.261 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 92], 01:35:21.261 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 103], 95.00th=[ 111], 01:35:21.261 | 99.00th=[ 129], 99.50th=[ 137], 99.90th=[ 157], 99.95th=[ 167], 01:35:21.261 | 99.99th=[ 326] 01:35:21.261 bw ( KiB/s): min=17944, max=17944, per=27.11%, avg=17944.00, stdev= 0.00, samples=1 01:35:21.261 iops : min= 4486, max= 4486, avg=4486.00, stdev= 0.00, samples=1 01:35:21.261 lat (usec) : 100=44.97%, 250=55.01%, 500=0.01% 01:35:21.261 lat (msec) : 2=0.01% 01:35:21.261 cpu : usr=1.80%, sys=5.40%, ctx=8615, majf=0, minf=9 01:35:21.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:21.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 issued rwts: total=4096,4519,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:21.261 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:21.261 job3: (groupid=0, jobs=1): err= 0: pid=105450: Mon Dec 9 05:30:12 2024 01:35:21.261 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 01:35:21.261 slat (nsec): min=6189, max=26150, avg=6877.15, stdev=867.64 01:35:21.261 clat (usec): min=87, max=393, avg=108.20, stdev=11.23 01:35:21.261 lat (usec): min=96, max=399, avg=115.08, stdev=11.24 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 96], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 102], 01:35:21.261 | 30.00th=[ 103], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 01:35:21.261 | 70.00th=[ 111], 80.00th=[ 114], 90.00th=[ 119], 95.00th=[ 126], 01:35:21.261 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 293], 01:35:21.261 | 99.99th=[ 392] 01:35:21.261 write: IOPS=4869, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 01:35:21.261 slat (nsec): min=9424, max=90840, avg=10756.51, stdev=3539.96 01:35:21.261 clat (usec): min=69, max=312, avg=84.15, stdev=10.17 01:35:21.261 lat (usec): min=79, max=322, avg=94.91, stdev=11.15 01:35:21.261 clat percentiles (usec): 01:35:21.261 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 78], 01:35:21.261 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 84], 01:35:21.261 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 94], 95.00th=[ 102], 01:35:21.261 | 99.00th=[ 122], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 167], 01:35:21.261 | 99.99th=[ 314] 01:35:21.261 bw ( KiB/s): min=20480, max=20480, per=30.95%, avg=20480.00, stdev= 0.00, samples=1 01:35:21.261 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 01:35:21.261 lat (usec) : 100=54.09%, 250=45.86%, 500=0.05% 01:35:21.261 cpu : usr=1.80%, sys=6.00%, ctx=9483, majf=0, minf=17 01:35:21.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:21.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:21.261 issued rwts: total=4608,4874,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:21.261 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:21.261 01:35:21.261 Run status group 0 (all jobs): 01:35:21.261 READ: bw=60.4MiB/s (63.4MB/s), 13.2MiB/s-18.0MiB/s (13.8MB/s-18.9MB/s), io=60.5MiB (63.4MB), run=1001-1001msec 01:35:21.261 WRITE: bw=64.6MiB/s (67.8MB/s), 14.0MiB/s-19.0MiB/s (14.7MB/s-19.9MB/s), io=64.7MiB (67.8MB), run=1001-1001msec 01:35:21.261 01:35:21.261 Disk stats (read/write): 01:35:21.261 nvme0n1: ios=3018/3072, merge=0/0, ticks=475/369, in_queue=844, util=88.28% 01:35:21.262 nvme0n2: ios=3030/3072, merge=0/0, ticks=494/374, in_queue=868, util=89.44% 01:35:21.262 nvme0n3: ios=3590/3898, merge=0/0, ticks=435/380, in_queue=815, util=89.53% 01:35:21.262 nvme0n4: ios=4096/4129, merge=0/0, ticks=451/362, in_queue=813, util=89.89% 01:35:21.262 05:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:35:21.262 [global] 01:35:21.262 thread=1 01:35:21.262 invalidate=1 01:35:21.262 rw=write 01:35:21.262 time_based=1 01:35:21.262 runtime=1 01:35:21.262 ioengine=libaio 01:35:21.262 direct=1 01:35:21.262 bs=4096 01:35:21.262 iodepth=128 01:35:21.262 norandommap=0 01:35:21.262 numjobs=1 01:35:21.262 01:35:21.262 verify_dump=1 01:35:21.262 verify_backlog=512 01:35:21.262 verify_state_save=0 01:35:21.262 do_verify=1 01:35:21.262 verify=crc32c-intel 01:35:21.262 [job0] 01:35:21.262 filename=/dev/nvme0n1 01:35:21.262 [job1] 01:35:21.262 filename=/dev/nvme0n2 01:35:21.262 [job2] 01:35:21.262 filename=/dev/nvme0n3 01:35:21.262 [job3] 01:35:21.262 filename=/dev/nvme0n4 01:35:21.262 Could not set queue depth (nvme0n1) 01:35:21.262 Could not set queue depth (nvme0n2) 01:35:21.262 Could not set queue depth (nvme0n3) 01:35:21.262 Could not set queue depth (nvme0n4) 01:35:21.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:21.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:21.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:21.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:21.262 fio-3.35 01:35:21.262 Starting 4 threads 01:35:22.631 01:35:22.631 job0: (groupid=0, jobs=1): err= 0: pid=105510: Mon Dec 9 05:30:13 2024 01:35:22.631 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 01:35:22.631 slat (usec): min=2, max=9067, avg=102.23, stdev=592.47 01:35:22.631 clat (usec): min=4321, max=35093, avg=12527.03, stdev=4300.74 01:35:22.631 lat (usec): min=4329, max=35103, avg=12629.25, stdev=4348.74 01:35:22.631 clat percentiles (usec): 01:35:22.631 | 1.00th=[ 5473], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8356], 01:35:22.631 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[12125], 60.00th=[13173], 01:35:22.631 | 70.00th=[14353], 80.00th=[16057], 90.00th=[18220], 95.00th=[19792], 01:35:22.631 | 99.00th=[25035], 99.50th=[30016], 99.90th=[34866], 99.95th=[34866], 01:35:22.631 | 99.99th=[34866] 01:35:22.631 write: IOPS=5005, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1007msec); 0 zone resets 01:35:22.631 slat (usec): min=3, max=7420, avg=100.91, stdev=494.72 01:35:22.631 clat (usec): min=3202, max=44245, avg=13866.15, stdev=7264.50 01:35:22.631 lat (usec): 
min=3228, max=44255, avg=13967.06, stdev=7322.00 01:35:22.631 clat percentiles (usec): 01:35:22.631 | 1.00th=[ 4228], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7570], 01:35:22.631 | 30.00th=[ 8979], 40.00th=[13042], 50.00th=[14222], 60.00th=[14746], 01:35:22.631 | 70.00th=[15139], 80.00th=[15401], 90.00th=[19530], 95.00th=[30540], 01:35:22.631 | 99.00th=[42206], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 01:35:22.631 | 99.99th=[44303] 01:35:22.631 bw ( KiB/s): min=17696, max=21616, per=19.16%, avg=19656.00, stdev=2771.86, samples=2 01:35:22.631 iops : min= 4424, max= 5404, avg=4914.00, stdev=692.96, samples=2 01:35:22.631 lat (msec) : 4=0.29%, 10=35.03%, 20=57.44%, 50=7.24% 01:35:22.631 cpu : usr=2.68%, sys=6.46%, ctx=810, majf=0, minf=17 01:35:22.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:35:22.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:22.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:22.631 issued rwts: total=4608,5041,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:22.631 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:22.631 job1: (groupid=0, jobs=1): err= 0: pid=105511: Mon Dec 9 05:30:13 2024 01:35:22.631 read: IOPS=8940, BW=34.9MiB/s (36.6MB/s)(35.0MiB/1002msec) 01:35:22.631 slat (usec): min=3, max=1711, avg=54.17, stdev=208.17 01:35:22.631 clat (usec): min=324, max=8740, avg=7092.48, stdev=659.65 01:35:22.631 lat (usec): min=1315, max=8748, avg=7146.65, stdev=657.89 01:35:22.631 clat percentiles (usec): 01:35:22.631 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 01:35:22.631 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 01:35:22.631 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7963], 01:35:22.631 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8717], 01:35:22.631 | 99.99th=[ 8717] 01:35:22.631 write: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec); 0 zone resets 01:35:22.631 slat (usec): min=5, max=2568, avg=52.19, stdev=199.80 01:35:22.631 clat (usec): min=5161, max=8400, avg=6859.77, stdev=443.76 01:35:22.631 lat (usec): min=5276, max=8411, avg=6911.97, stdev=406.55 01:35:22.631 clat percentiles (usec): 01:35:22.631 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6652], 01:35:22.631 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6915], 01:35:22.632 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7701], 01:35:22.632 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[ 8356], 99.95th=[ 8356], 01:35:22.632 | 99.99th=[ 8455] 01:35:22.632 bw ( KiB/s): min=36864, max=36937, per=35.97%, avg=36900.50, stdev=51.62, samples=2 01:35:22.632 iops : min= 9216, max= 9234, avg=9225.00, stdev=12.73, samples=2 01:35:22.632 lat (usec) : 500=0.01% 01:35:22.632 lat (msec) : 2=0.14%, 4=0.20%, 10=99.65% 01:35:22.632 cpu : usr=3.30%, sys=12.79%, ctx=1093, majf=0, minf=9 01:35:22.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:35:22.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:22.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:22.632 issued rwts: total=8958,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:22.632 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:22.632 job2: (groupid=0, jobs=1): err= 0: pid=105512: Mon Dec 9 05:30:13 2024 01:35:22.632 read: IOPS=7466, BW=29.2MiB/s (30.6MB/s)(29.2MiB/1001msec) 01:35:22.632 slat (usec): min=4, 
max=2939, avg=66.14, stdev=252.92 01:35:22.632 clat (usec): min=362, max=11596, avg=8528.26, stdev=836.68 01:35:22.632 lat (usec): min=1587, max=11604, avg=8594.40, stdev=833.10 01:35:22.632 clat percentiles (usec): 01:35:22.632 | 1.00th=[ 5866], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8094], 01:35:22.632 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 01:35:22.632 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 01:35:22.632 | 99.00th=[10290], 99.50th=[10945], 99.90th=[11469], 99.95th=[11600], 01:35:22.632 | 99.99th=[11600] 01:35:22.632 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 01:35:22.632 slat (usec): min=6, max=2005, avg=61.96, stdev=260.38 01:35:22.632 clat (usec): min=6193, max=10041, avg=8192.93, stdev=526.85 01:35:22.632 lat (usec): min=6493, max=10138, avg=8254.89, stdev=477.83 01:35:22.632 clat percentiles (usec): 01:35:22.632 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 7963], 01:35:22.632 | 30.00th=[ 8094], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8291], 01:35:22.632 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 01:35:22.632 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[10028], 99.95th=[10028], 01:35:22.632 | 99.99th=[10028] 01:35:22.632 bw ( KiB/s): min=30416, max=31086, per=29.97%, avg=30751.00, stdev=473.76, samples=2 01:35:22.632 iops : min= 7604, max= 7771, avg=7687.50, stdev=118.09, samples=2 01:35:22.632 lat (usec) : 500=0.01% 01:35:22.632 lat (msec) : 2=0.10%, 4=0.21%, 10=98.79%, 20=0.89% 01:35:22.632 cpu : usr=3.00%, sys=11.30%, ctx=823, majf=0, minf=11 01:35:22.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:35:22.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:22.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:22.632 issued rwts: total=7474,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:22.632 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:22.632 job3: (groupid=0, jobs=1): err= 0: pid=105513: Mon Dec 9 05:30:13 2024 01:35:22.632 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 01:35:22.632 slat (usec): min=2, max=15075, avg=135.03, stdev=752.85 01:35:22.632 clat (usec): min=8372, max=61794, avg=18007.33, stdev=8842.51 01:35:22.632 lat (usec): min=8381, max=61894, avg=18142.35, stdev=8900.90 01:35:22.632 clat percentiles (usec): 01:35:22.632 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[10814], 20.00th=[11338], 01:35:22.632 | 30.00th=[12911], 40.00th=[13829], 50.00th=[15008], 60.00th=[16188], 01:35:22.632 | 70.00th=[18220], 80.00th=[23725], 90.00th=[30278], 95.00th=[35914], 01:35:22.632 | 99.00th=[52167], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 01:35:22.632 | 99.99th=[61604] 01:35:22.632 write: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1007msec); 0 zone resets 01:35:22.632 slat (usec): min=4, max=15594, avg=128.81, stdev=696.31 01:35:22.632 clat (usec): min=5672, max=62593, avg=15708.46, stdev=7355.93 01:35:22.632 lat (usec): min=6095, max=63305, avg=15837.26, stdev=7424.36 01:35:22.632 clat percentiles (usec): 01:35:22.632 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10945], 01:35:22.632 | 30.00th=[13698], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 01:35:22.632 | 70.00th=[15401], 80.00th=[16057], 90.00th=[20317], 95.00th=[26346], 01:35:22.632 | 99.00th=[55313], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 01:35:22.632 | 99.99th=[62653] 01:35:22.632 bw ( KiB/s): min=13482, max=16640, 
per=14.68%, avg=15061.00, stdev=2233.04, samples=2 01:35:22.632 iops : min= 3370, max= 4160, avg=3765.00, stdev=558.61, samples=2 01:35:22.632 lat (msec) : 10=9.59%, 20=72.91%, 50=16.18%, 100=1.32% 01:35:22.632 cpu : usr=1.39%, sys=6.06%, ctx=748, majf=0, minf=13 01:35:22.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 01:35:22.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:22.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:22.632 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:22.632 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:22.632 01:35:22.632 Run status group 0 (all jobs): 01:35:22.632 READ: bw=95.5MiB/s (100MB/s), 13.9MiB/s-34.9MiB/s (14.6MB/s-36.6MB/s), io=96.2MiB (101MB), run=1001-1007msec 01:35:22.632 WRITE: bw=100MiB/s (105MB/s), 15.1MiB/s-35.9MiB/s (15.8MB/s-37.7MB/s), io=101MiB (106MB), run=1001-1007msec 01:35:22.632 01:35:22.632 Disk stats (read/write): 01:35:22.632 nvme0n1: ios=4146/4511, merge=0/0, ticks=38706/49934, in_queue=88640, util=89.48% 01:35:22.632 nvme0n2: ios=7873/8192, merge=0/0, ticks=13332/12344, in_queue=25676, util=90.44% 01:35:22.632 nvme0n3: ios=6693/6711, merge=0/0, ticks=14029/11941, in_queue=25970, util=90.47% 01:35:22.632 nvme0n4: ios=3078/3382, merge=0/0, ticks=20530/21069, in_queue=41599, util=89.52% 01:35:22.632 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:35:22.632 [global] 01:35:22.632 thread=1 01:35:22.632 invalidate=1 01:35:22.632 rw=randwrite 01:35:22.632 time_based=1 01:35:22.632 runtime=1 01:35:22.632 ioengine=libaio 01:35:22.632 direct=1 01:35:22.632 bs=4096 01:35:22.632 iodepth=128 01:35:22.632 norandommap=0 01:35:22.632 numjobs=1 01:35:22.632 01:35:22.632 verify_dump=1 01:35:22.632 verify_backlog=512 01:35:22.632 verify_state_save=0 01:35:22.632 do_verify=1 01:35:22.632 verify=crc32c-intel 01:35:22.632 [job0] 01:35:22.632 filename=/dev/nvme0n1 01:35:22.632 [job1] 01:35:22.632 filename=/dev/nvme0n2 01:35:22.632 [job2] 01:35:22.632 filename=/dev/nvme0n3 01:35:22.632 [job3] 01:35:22.632 filename=/dev/nvme0n4 01:35:22.632 Could not set queue depth (nvme0n1) 01:35:22.632 Could not set queue depth (nvme0n2) 01:35:22.632 Could not set queue depth (nvme0n3) 01:35:22.632 Could not set queue depth (nvme0n4) 01:35:22.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:22.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:22.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:22.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:35:22.632 fio-3.35 01:35:22.632 Starting 4 threads 01:35:24.020 01:35:24.020 job0: (groupid=0, jobs=1): err= 0: pid=105569: Mon Dec 9 05:30:15 2024 01:35:24.020 read: IOPS=3187, BW=12.4MiB/s (13.1MB/s)(12.5MiB/1005msec) 01:35:24.020 slat (usec): min=2, max=7431, avg=154.35, stdev=783.23 01:35:24.020 clat (usec): min=696, max=33344, avg=19423.25, stdev=5801.03 01:35:24.020 lat (usec): min=4419, max=33355, avg=19577.60, stdev=5799.80 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[14877], 01:35:24.020 | 30.00th=[16319], 
40.00th=[18220], 50.00th=[19792], 60.00th=[21365], 01:35:24.020 | 70.00th=[23200], 80.00th=[24773], 90.00th=[26608], 95.00th=[27919], 01:35:24.020 | 99.00th=[31851], 99.50th=[32113], 99.90th=[33424], 99.95th=[33424], 01:35:24.020 | 99.99th=[33424] 01:35:24.020 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 01:35:24.020 slat (usec): min=6, max=9282, avg=137.96, stdev=652.82 01:35:24.020 clat (usec): min=5576, max=37904, avg=18035.39, stdev=6853.63 01:35:24.020 lat (usec): min=6986, max=38896, avg=18173.35, stdev=6881.53 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 6980], 5.00th=[ 7308], 10.00th=[ 8225], 20.00th=[12518], 01:35:24.020 | 30.00th=[14222], 40.00th=[15533], 50.00th=[17171], 60.00th=[19268], 01:35:24.020 | 70.00th=[21627], 80.00th=[24773], 90.00th=[26870], 95.00th=[30016], 01:35:24.020 | 99.00th=[35390], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 01:35:24.020 | 99.99th=[38011] 01:35:24.020 bw ( KiB/s): min=13984, max=14717, per=17.60%, avg=14350.50, stdev=518.31, samples=2 01:35:24.020 iops : min= 3496, max= 3679, avg=3587.50, stdev=129.40, samples=2 01:35:24.020 lat (usec) : 750=0.01% 01:35:24.020 lat (msec) : 10=13.30%, 20=44.32%, 50=42.36% 01:35:24.020 cpu : usr=1.49%, sys=5.78%, ctx=555, majf=0, minf=9 01:35:24.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 01:35:24.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:24.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:24.020 issued rwts: total=3203,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:24.020 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:24.020 job1: (groupid=0, jobs=1): err= 0: pid=105570: Mon Dec 9 05:30:15 2024 01:35:24.020 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec) 01:35:24.020 slat (usec): min=2, max=10382, avg=152.46, stdev=771.87 01:35:24.020 clat (usec): min=457, max=44918, avg=18620.46, stdev=6029.86 01:35:24.020 lat (usec): min=4188, max=44934, avg=18772.92, stdev=6042.50 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[14353], 01:35:24.020 | 30.00th=[15139], 40.00th=[16581], 50.00th=[18220], 60.00th=[19268], 01:35:24.020 | 70.00th=[21103], 80.00th=[23462], 90.00th=[26346], 95.00th=[30540], 01:35:24.020 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 01:35:24.020 | 99.99th=[44827] 01:35:24.020 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 01:35:24.020 slat (usec): min=6, max=5661, avg=125.88, stdev=518.27 01:35:24.020 clat (usec): min=8847, max=44362, avg=17043.27, stdev=5619.67 01:35:24.020 lat (usec): min=8965, max=44372, avg=17169.15, stdev=5631.30 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 9503], 5.00th=[11207], 10.00th=[11863], 20.00th=[12518], 01:35:24.020 | 30.00th=[13435], 40.00th=[14222], 50.00th=[15401], 60.00th=[17171], 01:35:24.020 | 70.00th=[19006], 80.00th=[21103], 90.00th=[23462], 95.00th=[25560], 01:35:24.020 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41157], 99.95th=[44303], 01:35:24.020 | 99.99th=[44303] 01:35:24.020 bw ( KiB/s): min=12288, max=16416, per=17.61%, avg=14352.00, stdev=2918.94, samples=2 01:35:24.020 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 01:35:24.020 lat (usec) : 500=0.01% 01:35:24.020 lat (msec) : 10=5.04%, 20=65.31%, 50=29.64% 01:35:24.020 cpu : usr=1.00%, sys=6.18%, ctx=628, majf=0, minf=9 01:35:24.020 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 01:35:24.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:24.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:24.020 issued rwts: total=3525,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:24.020 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:24.020 job2: (groupid=0, jobs=1): err= 0: pid=105571: Mon Dec 9 05:30:15 2024 01:35:24.020 read: IOPS=8082, BW=31.6MiB/s (33.1MB/s)(31.6MiB/1002msec) 01:35:24.020 slat (usec): min=5, max=3707, avg=60.17, stdev=281.99 01:35:24.020 clat (usec): min=311, max=15869, avg=7752.46, stdev=1964.00 01:35:24.020 lat (usec): min=1393, max=15885, avg=7812.64, stdev=1983.87 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 4752], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6456], 01:35:24.020 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7373], 01:35:24.020 | 70.00th=[ 7898], 80.00th=[ 8979], 90.00th=[11076], 95.00th=[12125], 01:35:24.020 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14746], 99.95th=[15401], 01:35:24.020 | 99.99th=[15926] 01:35:24.020 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 01:35:24.020 slat (usec): min=6, max=3202, avg=58.83, stdev=264.63 01:35:24.020 clat (usec): min=4788, max=18032, avg=7799.73, stdev=2174.29 01:35:24.020 lat (usec): min=4799, max=18046, avg=7858.55, stdev=2198.79 01:35:24.020 clat percentiles (usec): 01:35:24.020 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 6259], 20.00th=[ 6325], 01:35:24.021 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6915], 01:35:24.021 | 70.00th=[ 8094], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[12125], 01:35:24.021 | 99.00th=[15926], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 01:35:24.021 | 99.99th=[17957] 01:35:24.021 bw ( KiB/s): min=28672, max=36864, per=40.20%, avg=32768.00, stdev=5792.62, samples=2 01:35:24.021 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 01:35:24.021 lat (usec) : 500=0.01% 01:35:24.021 lat (msec) : 2=0.08%, 4=0.18%, 10=83.35%, 20=16.39% 01:35:24.021 cpu : usr=3.60%, sys=10.99%, ctx=948, majf=0, minf=11 01:35:24.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:35:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:24.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:24.021 issued rwts: total=8099,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:24.021 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:24.021 job3: (groupid=0, jobs=1): err= 0: pid=105572: Mon Dec 9 05:30:15 2024 01:35:24.021 read: IOPS=4995, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1004msec) 01:35:24.021 slat (usec): min=3, max=8129, avg=89.39, stdev=481.86 01:35:24.021 clat (usec): min=1745, max=25957, avg=11079.80, stdev=2795.84 01:35:24.021 lat (usec): min=4354, max=25972, avg=11169.19, stdev=2825.29 01:35:24.021 clat percentiles (usec): 01:35:24.021 | 1.00th=[ 5800], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 8979], 01:35:24.021 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[10945], 01:35:24.021 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14615], 95.00th=[16712], 01:35:24.021 | 99.00th=[20317], 99.50th=[22414], 99.90th=[25822], 99.95th=[25822], 01:35:24.021 | 99.99th=[26084] 01:35:24.021 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 01:35:24.021 slat (usec): min=7, max=6811, avg=103.76, stdev=461.44 01:35:24.021 clat (usec): min=5208, 
max=40111, avg=13961.82, stdev=7678.77 01:35:24.021 lat (usec): min=5220, max=40132, avg=14065.58, stdev=7735.49 01:35:24.021 clat percentiles (usec): 01:35:24.021 | 1.00th=[ 7046], 5.00th=[ 7504], 10.00th=[ 7570], 20.00th=[ 7767], 01:35:24.021 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[11731], 60.00th=[12387], 01:35:24.021 | 70.00th=[14484], 80.00th=[19268], 90.00th=[27132], 95.00th=[31851], 01:35:24.021 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 01:35:24.021 | 99.99th=[40109] 01:35:24.021 bw ( KiB/s): min=16384, max=24625, per=25.15%, avg=20504.50, stdev=5827.27, samples=2 01:35:24.021 iops : min= 4096, max= 6156, avg=5126.00, stdev=1456.64, samples=2 01:35:24.021 lat (msec) : 2=0.01%, 10=43.20%, 20=46.46%, 50=10.33% 01:35:24.021 cpu : usr=1.99%, sys=8.57%, ctx=506, majf=0, minf=10 01:35:24.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:35:24.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:24.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:35:24.021 issued rwts: total=5015,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:24.021 latency : target=0, window=0, percentile=100.00%, depth=128 01:35:24.021 01:35:24.021 Run status group 0 (all jobs): 01:35:24.021 READ: bw=77.1MiB/s (80.9MB/s), 12.4MiB/s-31.6MiB/s (13.1MB/s-33.1MB/s), io=77.5MiB (81.3MB), run=1002-1005msec 01:35:24.021 WRITE: bw=79.6MiB/s (83.5MB/s), 13.9MiB/s-31.9MiB/s (14.6MB/s-33.5MB/s), io=80.0MiB (83.9MB), run=1002-1005msec 01:35:24.021 01:35:24.021 Disk stats (read/write): 01:35:24.021 nvme0n1: ios=2866/3072, merge=0/0, ticks=13581/12855, in_queue=26436, util=89.48% 01:35:24.021 nvme0n2: ios=3070/3072, merge=0/0, ticks=14115/12501, in_queue=26616, util=90.14% 01:35:24.021 nvme0n3: ios=6931/7168, merge=0/0, ticks=17622/16810, in_queue=34432, util=90.37% 01:35:24.021 nvme0n4: ios=4608/4791, merge=0/0, ticks=24598/27974, in_queue=52572, util=89.93% 01:35:24.021 05:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:35:24.021 05:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105586 01:35:24.021 05:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:35:24.021 05:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:35:24.021 [global] 01:35:24.021 thread=1 01:35:24.021 invalidate=1 01:35:24.021 rw=read 01:35:24.021 time_based=1 01:35:24.021 runtime=10 01:35:24.021 ioengine=libaio 01:35:24.021 direct=1 01:35:24.021 bs=4096 01:35:24.021 iodepth=1 01:35:24.021 norandommap=1 01:35:24.021 numjobs=1 01:35:24.021 01:35:24.021 [job0] 01:35:24.021 filename=/dev/nvme0n1 01:35:24.021 [job1] 01:35:24.021 filename=/dev/nvme0n2 01:35:24.021 [job2] 01:35:24.021 filename=/dev/nvme0n3 01:35:24.021 [job3] 01:35:24.021 filename=/dev/nvme0n4 01:35:24.021 Could not set queue depth (nvme0n1) 01:35:24.021 Could not set queue depth (nvme0n2) 01:35:24.021 Could not set queue depth (nvme0n3) 01:35:24.021 Could not set queue depth (nvme0n4) 01:35:24.021 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:24.021 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:24.021 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 01:35:24.021 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:35:24.021 fio-3.35 01:35:24.021 Starting 4 threads 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:35:27.299 fio: pid=105629, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:35:27.299 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=65294336, buflen=4096 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:35:27.299 fio: pid=105628, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:35:27.299 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=107151360, buflen=4096 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:35:27.299 fio: pid=105626, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:35:27.299 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41799680, buflen=4096 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:27.299 05:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:35:27.558 fio: pid=105627, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:35:27.558 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15491072, buflen=4096 01:35:27.558 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:27.558 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:35:27.558 01:35:27.558 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105626: Mon Dec 9 05:30:19 2024 01:35:27.558 read: IOPS=8239, BW=32.2MiB/s (33.7MB/s)(104MiB/3227msec) 01:35:27.558 slat (usec): min=4, max=15913, avg= 8.24, stdev=140.95 01:35:27.558 clat (usec): min=79, max=1600, avg=112.64, stdev=26.22 01:35:27.558 lat (usec): min=84, max=16017, avg=120.88, stdev=143.49 01:35:27.558 clat percentiles (usec): 01:35:27.558 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 01:35:27.558 | 30.00th=[ 98], 40.00th=[ 102], 50.00th=[ 113], 60.00th=[ 118], 01:35:27.558 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 135], 95.00th=[ 143], 01:35:27.558 | 99.00th=[ 165], 99.50th=[ 200], 99.90th=[ 379], 99.95th=[ 429], 01:35:27.558 | 99.99th=[ 758] 01:35:27.558 bw ( KiB/s): min=30015, max=37408, per=31.93%, avg=33002.50, stdev=3112.69, samples=6 01:35:27.558 iops : min= 7503, max= 9352, avg=8250.50, stdev=778.32, samples=6 01:35:27.558 lat (usec) : 100=36.07%, 250=63.59%, 500=0.29%, 750=0.03%, 1000=0.01% 01:35:27.558 lat (msec) : 2=0.01% 01:35:27.558 
cpu : usr=0.74%, sys=4.49%, ctx=26614, majf=0, minf=1 01:35:27.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:27.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 issued rwts: total=26590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:27.558 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:27.558 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105627: Mon Dec 9 05:30:19 2024 01:35:27.558 read: IOPS=5864, BW=22.9MiB/s (24.0MB/s)(78.8MiB/3439msec) 01:35:27.558 slat (usec): min=4, max=12059, avg= 8.96, stdev=151.92 01:35:27.558 clat (usec): min=83, max=11524, avg=161.02, stdev=125.51 01:35:27.558 lat (usec): min=90, max=12169, avg=169.97, stdev=196.60 01:35:27.558 clat percentiles (usec): 01:35:27.558 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 113], 01:35:27.558 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 01:35:27.558 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 200], 01:35:27.558 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 334], 99.95th=[ 922], 01:35:27.558 | 99.99th=[ 7242] 01:35:27.558 bw ( KiB/s): min=21720, max=24709, per=21.54%, avg=22263.50, stdev=1198.66, samples=6 01:35:27.558 iops : min= 5430, max= 6177, avg=5565.83, stdev=299.56, samples=6 01:35:27.558 lat (usec) : 100=11.27%, 250=88.45%, 500=0.20%, 750=0.01%, 1000=0.01% 01:35:27.558 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01% 01:35:27.558 cpu : usr=0.61%, sys=3.81%, ctx=20176, majf=0, minf=2 01:35:27.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:27.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 issued rwts: total=20167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:27.558 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:27.558 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105628: Mon Dec 9 05:30:19 2024 01:35:27.558 read: IOPS=8642, BW=33.8MiB/s (35.4MB/s)(102MiB/3027msec) 01:35:27.558 slat (usec): min=4, max=15799, avg= 7.74, stdev=114.88 01:35:27.558 clat (usec): min=71, max=1454, avg=107.48, stdev=19.19 01:35:27.558 lat (usec): min=95, max=15909, avg=115.22, stdev=116.95 01:35:27.558 clat percentiles (usec): 01:35:27.558 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 101], 01:35:27.558 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 106], 01:35:27.558 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 119], 95.00th=[ 127], 01:35:27.558 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 212], 99.95th=[ 293], 01:35:27.558 | 99.99th=[ 1434] 01:35:27.558 bw ( KiB/s): min=34512, max=35504, per=33.78%, avg=34912.00, stdev=462.62, samples=5 01:35:27.558 iops : min= 8628, max= 8876, avg=8728.00, stdev=115.65, samples=5 01:35:27.558 lat (usec) : 100=16.28%, 250=83.65%, 500=0.05% 01:35:27.558 lat (msec) : 2=0.02% 01:35:27.558 cpu : usr=0.46%, sys=5.32%, ctx=26164, majf=0, minf=2 01:35:27.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:27.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 issued rwts: total=26161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:27.558 
latency : target=0, window=0, percentile=100.00%, depth=1 01:35:27.558 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105629: Mon Dec 9 05:30:19 2024 01:35:27.558 read: IOPS=5560, BW=21.7MiB/s (22.8MB/s)(62.3MiB/2867msec) 01:35:27.558 slat (nsec): min=4200, max=68297, avg=5145.52, stdev=1603.30 01:35:27.558 clat (usec): min=105, max=1249, avg=174.23, stdev=25.56 01:35:27.558 lat (usec): min=111, max=1254, avg=179.37, stdev=25.30 01:35:27.558 clat percentiles (usec): 01:35:27.558 | 1.00th=[ 114], 5.00th=[ 122], 10.00th=[ 161], 20.00th=[ 167], 01:35:27.558 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 01:35:27.558 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 204], 01:35:27.558 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 269], 99.95th=[ 351], 01:35:27.558 | 99.99th=[ 1237] 01:35:27.558 bw ( KiB/s): min=21720, max=24672, per=21.63%, avg=22355.20, stdev=1295.83, samples=5 01:35:27.558 iops : min= 5430, max= 6168, avg=5588.80, stdev=323.96, samples=5 01:35:27.558 lat (usec) : 250=99.82%, 500=0.16% 01:35:27.558 lat (msec) : 2=0.02% 01:35:27.558 cpu : usr=0.49%, sys=2.79%, ctx=15945, majf=0, minf=2 01:35:27.558 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:35:27.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:35:27.558 issued rwts: total=15942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:35:27.558 latency : target=0, window=0, percentile=100.00%, depth=1 01:35:27.558 01:35:27.558 Run status group 0 (all jobs): 01:35:27.558 READ: bw=101MiB/s (106MB/s), 21.7MiB/s-33.8MiB/s (22.8MB/s-35.4MB/s), io=347MiB (364MB), run=2867-3439msec 01:35:27.558 01:35:27.558 Disk stats (read/write): 01:35:27.558 nvme0n1: ios=25734/0, merge=0/0, ticks=2939/0, in_queue=2939, util=95.35% 01:35:27.558 nvme0n2: ios=19665/0, merge=0/0, ticks=3185/0, in_queue=3185, util=95.36% 01:35:27.558 nvme0n3: ios=25160/0, merge=0/0, ticks=2751/0, in_queue=2751, util=96.68% 01:35:27.558 nvme0n4: ios=14693/0, merge=0/0, ticks=2447/0, in_queue=2447, util=96.49% 01:35:27.816 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:27.816 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:35:28.074 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:28.074 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:35:28.074 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:28.074 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:35:28.332 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:35:28.332 05:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:35:28.590 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:35:28.590 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105586 01:35:28.590 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:35:28.590 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:35:28.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:28.848 nvmf hotplug test: fio failed as expected 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:35:28.848 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:35:29.107 
rmmod nvme_tcp 01:35:29.107 rmmod nvme_fabrics 01:35:29.107 rmmod nvme_keyring 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105109 ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105109 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105109 ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105109 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105109 01:35:29.107 killing process with pid 105109 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105109' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105109 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105109 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:35:29.107 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:35:29.368 05:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:35:29.368 01:35:29.368 real 0m17.779s 01:35:29.368 user 0m54.199s 01:35:29.368 sys 0m8.673s 01:35:29.368 ************************************ 01:35:29.368 END TEST nvmf_fio_target 01:35:29.368 ************************************ 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:35:29.368 ************************************ 01:35:29.368 START TEST nvmf_bdevio 01:35:29.368 ************************************ 01:35:29.368 05:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 01:35:29.637 * Looking for test storage... 01:35:29.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:35:29.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:29.637 --rc genhtml_branch_coverage=1 01:35:29.637 --rc genhtml_function_coverage=1 01:35:29.637 --rc genhtml_legend=1 01:35:29.637 --rc geninfo_all_blocks=1 01:35:29.637 --rc geninfo_unexecuted_blocks=1 01:35:29.637 01:35:29.637 ' 01:35:29.637 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:35:29.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:29.637 --rc genhtml_branch_coverage=1 01:35:29.637 --rc genhtml_function_coverage=1 01:35:29.638 --rc genhtml_legend=1 01:35:29.638 --rc geninfo_all_blocks=1 01:35:29.638 --rc geninfo_unexecuted_blocks=1 01:35:29.638 01:35:29.638 ' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:35:29.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:29.638 --rc genhtml_branch_coverage=1 01:35:29.638 --rc genhtml_function_coverage=1 01:35:29.638 --rc genhtml_legend=1 01:35:29.638 --rc geninfo_all_blocks=1 01:35:29.638 --rc geninfo_unexecuted_blocks=1 01:35:29.638 01:35:29.638 ' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:35:29.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:29.638 --rc genhtml_branch_coverage=1 01:35:29.638 --rc genhtml_function_coverage=1 01:35:29.638 --rc genhtml_legend=1 01:35:29.638 --rc geninfo_all_blocks=1 01:35:29.638 --rc geninfo_unexecuted_blocks=1 01:35:29.638 01:35:29.638 ' 01:35:29.638 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:35:29.638 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:35:29.638 Cannot find device "nvmf_init_br" 01:35:29.638 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:35:29.639 Cannot find device "nvmf_init_br2" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:35:29.639 Cannot find device "nvmf_tgt_br" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:35:29.639 Cannot find device "nvmf_tgt_br2" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:35:29.639 Cannot find device "nvmf_init_br" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:35:29.639 Cannot find device "nvmf_init_br2" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:35:29.639 Cannot find device "nvmf_tgt_br" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:35:29.639 Cannot find device "nvmf_tgt_br2" 01:35:29.639 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:35:29.639 Cannot find device "nvmf_br" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:35:29.639 Cannot find device "nvmf_init_if" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:35:29.639 Cannot find device "nvmf_init_if2" 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:29.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:29.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:35:29.639 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:35:29.897 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:35:29.897 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:35:29.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:35:29.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 01:35:29.898 01:35:29.898 --- 10.0.0.3 ping statistics --- 01:35:29.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:29.898 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:35:29.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:35:29.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 01:35:29.898 01:35:29.898 --- 10.0.0.4 ping statistics --- 01:35:29.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:29.898 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:35:29.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:35:29.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 01:35:29.898 01:35:29.898 --- 10.0.0.1 ping statistics --- 01:35:29.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:29.898 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:35:29.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:35:29.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 01:35:29.898 01:35:29.898 --- 10.0.0.2 ping statistics --- 01:35:29.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:29.898 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106000 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106000 01:35:29.898 
05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106000 ']' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:29.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:29.898 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:29.898 [2024-12-09 05:30:21.476431] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:35:29.898 [2024-12-09 05:30:21.477188] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:29.898 [2024-12-09 05:30:21.477230] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:35:30.156 [2024-12-09 05:30:21.625842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:35:30.156 [2024-12-09 05:30:21.661815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:35:30.156 [2024-12-09 05:30:21.662012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:35:30.156 [2024-12-09 05:30:21.662068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:35:30.156 [2024-12-09 05:30:21.662587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:35:30.156 [2024-12-09 05:30:21.662636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:35:30.156 [2024-12-09 05:30:21.663613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:35:30.156 [2024-12-09 05:30:21.663801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:35:30.156 [2024-12-09 05:30:21.664258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:35:30.156 [2024-12-09 05:30:21.664442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:35:30.156 [2024-12-09 05:30:21.719711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:35:30.156 [2024-12-09 05:30:21.719973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:35:30.156 [2024-12-09 05:30:21.721523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
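The bdevio-side nvmf target was launched above inside the nvmf_tgt_ns_spdk namespace as ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78. The coremask 0x78 is binary 0111 1000, i.e. cores 3 through 6, which is why exactly four "Reactor started on core N" notices appear, and --interrupt-mode is what drives the surrounding "Set spdk_thread (...) to intr mode" notices. Once the target is listening, the entries that follow provision the test subsystem over RPC; the same sequence issued standalone, as a sketch (RPC names and arguments are taken verbatim from the trace; the assumption here is that rpc_cmd forwards them to scripts/rpc.py on /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport init
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420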
01:35:30.156 [2024-12-09 05:30:21.721659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:35:30.156 [2024-12-09 05:30:21.722136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 [2024-12-09 05:30:22.401796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 Malloc0 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.089 [2024-12-09 05:30:22.469945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:35:31.089 { 01:35:31.089 "params": { 01:35:31.089 "name": "Nvme$subsystem", 01:35:31.089 "trtype": "$TEST_TRANSPORT", 01:35:31.089 "traddr": "$NVMF_FIRST_TARGET_IP", 01:35:31.089 "adrfam": "ipv4", 01:35:31.089 "trsvcid": "$NVMF_PORT", 01:35:31.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:35:31.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:35:31.089 "hdgst": ${hdgst:-false}, 01:35:31.089 "ddgst": ${ddgst:-false} 01:35:31.089 }, 01:35:31.089 "method": "bdev_nvme_attach_controller" 01:35:31.089 } 01:35:31.089 EOF 01:35:31.089 )") 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:35:31.089 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:35:31.090 "params": { 01:35:31.090 "name": "Nvme1", 01:35:31.090 "trtype": "tcp", 01:35:31.090 "traddr": "10.0.0.3", 01:35:31.090 "adrfam": "ipv4", 01:35:31.090 "trsvcid": "4420", 01:35:31.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:35:31.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:35:31.090 "hdgst": false, 01:35:31.090 "ddgst": false 01:35:31.090 }, 01:35:31.090 "method": "bdev_nvme_attach_controller" 01:35:31.090 }' 01:35:31.090 [2024-12-09 05:30:22.509688] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
01:35:31.090 [2024-12-09 05:30:22.509743] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106050 ] 01:35:31.090 [2024-12-09 05:30:22.650188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:35:31.090 [2024-12-09 05:30:22.687591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:35:31.090 [2024-12-09 05:30:22.687754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:35:31.090 [2024-12-09 05:30:22.687864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:31.347 I/O targets: 01:35:31.347 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:35:31.347 01:35:31.347 01:35:31.347 CUnit - A unit testing framework for C - Version 2.1-3 01:35:31.347 http://cunit.sourceforge.net/ 01:35:31.347 01:35:31.347 01:35:31.347 Suite: bdevio tests on: Nvme1n1 01:35:31.347 Test: blockdev write read block ...passed 01:35:31.347 Test: blockdev write zeroes read block ...passed 01:35:31.347 Test: blockdev write zeroes read no split ...passed 01:35:31.347 Test: blockdev write zeroes read split ...passed 01:35:31.604 Test: blockdev write zeroes read split partial ...passed 01:35:31.605 Test: blockdev reset ...[2024-12-09 05:30:22.969608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:35:31.605 [2024-12-09 05:30:22.969687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b8f50 (9): Bad file descriptor 01:35:31.605 [2024-12-09 05:30:22.972197] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:35:31.605 passed 01:35:31.605 Test: blockdev write read 8 blocks ...passed 01:35:31.605 Test: blockdev write read size > 128k ...passed 01:35:31.605 Test: blockdev write read invalid size ...passed 01:35:31.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:35:31.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:35:31.605 Test: blockdev write read max offset ...passed 01:35:31.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:35:31.605 Test: blockdev writev readv 8 blocks ...passed 01:35:31.605 Test: blockdev writev readv 30 x 1block ...passed 01:35:31.605 Test: blockdev writev readv block ...passed 01:35:31.605 Test: blockdev writev readv size > 128k ...passed 01:35:31.605 Test: blockdev writev readv size > 128k in two iovs ...passed 01:35:31.605 Test: blockdev comparev and writev ...[2024-12-09 05:30:23.145043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:35:31.605 [2024-12-09 05:30:23.145898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:35:31.605 [2024-12-09 05:30:23.145903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:35:31.605 passed 01:35:31.863 Test: blockdev nvme passthru rw ...passed 01:35:31.863 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:30:23.230871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:35:31.864 [2024-12-09 05:30:23.230884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:35:31.864 [2024-12-09 05:30:23.230966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:35:31.864 [2024-12-09 05:30:23.230971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:35:31.864 [2024-12-09 05:30:23.231042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:35:31.864 [2024-12-09 05:30:23.231047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:35:31.864 [2024-12-09 05:30:23.231124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:35:31.864 [2024-12-09 05:30:23.231129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:35:31.864 passed 01:35:31.864 Test: blockdev nvme admin passthru ...passed 01:35:31.864 Test: blockdev copy ...passed 01:35:31.864 01:35:31.864 Run Summary: Type Total Ran Passed Failed Inactive 01:35:31.864 suites 1 1 n/a 0 0 01:35:31.864 tests 23 23 23 0 0 01:35:31.864 asserts 152 152 152 0 n/a 01:35:31.864 01:35:31.864 Elapsed time = 0.938 seconds 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:35:31.864 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:35:32.121 rmmod nvme_tcp 01:35:32.121 rmmod nvme_fabrics 01:35:32.121 rmmod nvme_keyring 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
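The bdevio suite above finished clean (23 of 23 tests, 152 asserts, 0.938 s), so teardown starts: the cnode1 subsystem is deleted, the EXIT trap is cleared, and nvmftestfini unloads the initiator-side NVMe modules before stopping the target by pid. The stop sequence traced just below, reduced to its essentials (killprocess in autotest_common.sh does more bookkeeping than this sketch):

  modprobe -v -r nvme-tcp       # verbose removal, which reports the nvme_tcp/nvme_fabrics/nvme_keyring rmmods above
  modprobe -v -r nvme-fabrics
  kill -0 106000                # target still alive?
  kill 106000                   # stop it ...
  wait 106000                   # ... and reap it before the veth topology is torn down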
01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106000 ']' 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106000 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106000 ']' 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106000 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:32.121 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106000 01:35:32.121 killing process with pid 106000 01:35:32.122 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:35:32.122 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:35:32.122 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106000' 01:35:32.122 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106000 01:35:32.122 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106000 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:35:32.379 05:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:35:32.379 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:35:32.380 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:35:32.380 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:32.380 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:35:32.637 01:35:32.637 real 0m3.091s 01:35:32.637 user 0m6.575s 01:35:32.637 sys 0m0.979s 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:32.637 ************************************ 01:35:32.637 END TEST nvmf_bdevio 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:35:32.637 ************************************ 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:35:32.637 01:35:32.637 real 3m22.830s 01:35:32.637 user 9m3.748s 01:35:32.637 sys 1m0.189s 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:32.637 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:35:32.637 ************************************ 01:35:32.637 END TEST nvmf_target_core_interrupt_mode 01:35:32.637 ************************************ 01:35:32.637 05:30:24 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:35:32.637 05:30:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:35:32.637 05:30:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:32.637 05:30:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:32.637 ************************************ 01:35:32.637 START TEST nvmf_interrupt 01:35:32.637 ************************************ 01:35:32.637 05:30:24 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:35:32.637 * Looking for test storage... 01:35:32.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:35:32.637 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:35:32.637 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:35:32.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:32.638 --rc genhtml_branch_coverage=1 01:35:32.638 --rc genhtml_function_coverage=1 01:35:32.638 --rc genhtml_legend=1 01:35:32.638 --rc geninfo_all_blocks=1 01:35:32.638 --rc geninfo_unexecuted_blocks=1 01:35:32.638 01:35:32.638 ' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:35:32.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:32.638 --rc genhtml_branch_coverage=1 01:35:32.638 --rc genhtml_function_coverage=1 01:35:32.638 --rc genhtml_legend=1 01:35:32.638 --rc geninfo_all_blocks=1 01:35:32.638 --rc geninfo_unexecuted_blocks=1 01:35:32.638 01:35:32.638 ' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:35:32.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:32.638 --rc genhtml_branch_coverage=1 01:35:32.638 --rc genhtml_function_coverage=1 01:35:32.638 --rc genhtml_legend=1 01:35:32.638 --rc geninfo_all_blocks=1 01:35:32.638 --rc geninfo_unexecuted_blocks=1 01:35:32.638 01:35:32.638 ' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:35:32.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:32.638 --rc genhtml_branch_coverage=1 01:35:32.638 --rc genhtml_function_coverage=1 01:35:32.638 --rc genhtml_legend=1 01:35:32.638 --rc geninfo_all_blocks=1 01:35:32.638 --rc geninfo_unexecuted_blocks=1 01:35:32.638 01:35:32.638 ' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
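The scripts/common.sh activity above is only the coverage-tooling probe run as the interrupt test script starts: lcov --version is parsed and compared against 2 so the matching --rc option spelling can be picked. The dotted-version comparison it performs, reduced to a standalone sketch (the real helpers are cmp_versions and lt in scripts/common.sh; this is not a copy of them):

  lt() {   # return 0 if $1 < $2, comparing dotted versions field by field
      local -a a b
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      local i
      for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # lcov 1.x spelling, as exported above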
01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:32.638 05:30:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 01:35:32.639 05:30:24 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:35:32.639 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
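For this interrupt-mode run the target command line is assembled incrementally by nvmf/common.sh, as the NVMF_APP+= entries above show, and nvmfappstart later adds the coremask. Put together (the first line is an assumption about how the array is seeded; that part is not visible in this excerpt):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)         # assumed seed of the array
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                        # shm id (0 here) and full tracepoint mask
  NVMF_APP+=(--interrupt-mode)                                       # added because this job enables interrupt mode
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")             # later: prefix with "ip netns exec nvmf_tgt_ns_spdk"

nvmfappstart -m 0x3 then appends the coremask, producing the launch line visible further down, and the two "Reactor started on core" notices (cores 0 and 1) match that mask.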
01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:35:32.897 Cannot find device "nvmf_init_br" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:35:32.897 Cannot find device "nvmf_init_br2" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:35:32.897 Cannot find device "nvmf_tgt_br" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:35:32.897 Cannot find device "nvmf_tgt_br2" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:35:32.897 Cannot find device "nvmf_init_br" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:35:32.897 Cannot find device "nvmf_init_br2" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:35:32.897 Cannot find device "nvmf_tgt_br" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:35:32.897 Cannot find device "nvmf_tgt_br2" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:35:32.897 Cannot find device "nvmf_br" 01:35:32.897 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 01:35:32.898 Cannot find device "nvmf_init_if" 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:35:32.898 Cannot find device "nvmf_init_if2" 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:32.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:32.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
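nvmf_veth_init is rebuilding the same virtual topology the bdevio run used: one network namespace for the target and four veth pairs that will all hang off a common bridge. Condensed from the commands traced in this block (device names and addresses as in the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator leg, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator leg, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target leg, 10.0.0.3/24, moved into the netns
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target leg, 10.0.0.4/24, moved into the netns
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # every end is then brought up; in the entries below the four *_br peers are enslaved to the
  # nvmf_br bridge and iptables ACCEPT rules (tagged SPDK_NVMF) open TCP port 4420 before the pings.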
01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:35:32.898 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:35:33.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:35:33.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 01:35:33.155 01:35:33.155 --- 10.0.0.3 ping statistics --- 01:35:33.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:33.155 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:35:33.155 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:35:33.155 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 01:35:33.155 01:35:33.155 --- 10.0.0.4 ping statistics --- 01:35:33.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:33.155 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:35:33.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:35:33.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 01:35:33.155 01:35:33.155 --- 10.0.0.1 ping statistics --- 01:35:33.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:33.155 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:35:33.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:35:33.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:35:33.155 01:35:33.155 --- 10.0.0.2 ping statistics --- 01:35:33.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:35:33.155 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106295 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106295 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106295 ']' 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:33.155 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:33.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:33.156 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:33.156 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:33.156 05:30:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.156 05:30:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:35:33.156 [2024-12-09 05:30:24.623936] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:35:33.156 [2024-12-09 05:30:24.624689] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:33.156 [2024-12-09 05:30:24.624731] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:35:33.156 [2024-12-09 05:30:24.762909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:35:33.413 [2024-12-09 05:30:24.797321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:35:33.413 [2024-12-09 05:30:24.797358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:35:33.413 [2024-12-09 05:30:24.797364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:35:33.413 [2024-12-09 05:30:24.797369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:35:33.413 [2024-12-09 05:30:24.797374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:35:33.413 [2024-12-09 05:30:24.798245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:35:33.413 [2024-12-09 05:30:24.798329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:33.413 [2024-12-09 05:30:24.851321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:35:33.413 [2024-12-09 05:30:24.851609] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:35:33.413 [2024-12-09 05:30:24.852085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 01:35:33.976 5000+0 records in 01:35:33.976 5000+0 records out 01:35:33.976 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0196497 s, 521 MB/s 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 AIO0 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 [2024-12-09 05:30:25.551316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:33.976 [2024-12-09 05:30:25.579382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106295 0 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 0 idle 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:33.976 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.17 reactor_0' 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.17 reactor_0 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106295 1 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 1 idle 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:34.234 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106299 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.00 reactor_1' 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106299 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.00 reactor_1 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106372 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106295 0 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106295 0 busy 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@10 -- # local pid=106295 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:34.492 05:30:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.17 reactor_0' 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 45440 32896 S 0.0 0.4 0:00.17 reactor_0 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:35:34.492 05:30:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 46592 33152 R 99.9 0.4 0:01.61 reactor_0' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 46592 33152 R 99.9 0.4 0:01.61 reactor_0 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ 
busy = \i\d\l\e ]] 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106295 1 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106295 1 busy 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106299 root 20 0 64.2g 46592 33152 R 66.7 0.4 0:00.82 reactor_1' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106299 root 20 0 64.2g 46592 33152 R 66.7 0.4 0:00.82 reactor_1 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:35.865 05:30:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106372 01:35:45.832 Initializing NVMe Controllers 01:35:45.832 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:35:45.832 Controller IO queue size 256, less than required. 01:35:45.832 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:35:45.832 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:35:45.832 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:35:45.832 Initialization complete. Launching workers. 
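The reactor_is_busy / reactor_is_idle checks traced above all reduce to the same pattern: take one batch-mode top sample for the nvmf_tgt PID with threads enabled, grep out the reactor_<idx> thread, read the %CPU column with awk, truncate it to an integer, and compare it against a threshold (65%/30% for the idle checks, 30% once BUSY_THRESHOLD is overridden for the busy checks). A simplified, self-contained rendering of that logic, without the retry loop and one-second sleep the real interrupt/common.sh helper adds, could look like:

# Rough sketch of the per-reactor CPU check seen in the trace (not the actual helper).
# pid: nvmf_tgt PID, idx: reactor index, state: "busy" or "idle".
reactor_cpu_check() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=${BUSY_THRESHOLD:-65}
    local idle_threshold=30

    # One batch-mode top iteration, threads view, limited to the target PID.
    local line
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")

    # %CPU is the 9th column; drop the fractional part for integer arithmetic.
    local cpu_rate
    cpu_rate=$(awk '{print $9}' <<< "$line")
    cpu_rate=${cpu_rate%.*}

    if [[ $state == busy ]]; then
        (( cpu_rate >= busy_threshold ))   # a busy reactor should sit at or above the threshold
    else
        (( cpu_rate <= idle_threshold ))   # an idle reactor should stay at or below it
    fi
}

# Example with the PID from this run: expect reactor_0 busy while spdk_nvme_perf is active.
# reactor_cpu_check 106295 0 busy && echo "reactor_0 is busy"

In interrupt mode this comparison is the substance of the test: with no I/O the reactors report 0.0% CPU (truly idle, no polling), and once perf drives traffic against the subsystem they jump to the 99.9% and 66.7% readings seen above.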
01:35:45.832 ======================================================== 01:35:45.832 Latency(us) 01:35:45.832 Device Information : IOPS MiB/s Average min max 01:35:45.832 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 9409.90 36.76 27233.64 4188.51 49059.19 01:35:45.832 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 9292.40 36.30 27572.42 6730.74 50142.09 01:35:45.832 ======================================================== 01:35:45.832 Total : 18702.30 73.06 27401.97 4188.51 50142.09 01:35:45.832 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106295 0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 0 idle 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:12.75 reactor_0' 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:12.75 reactor_0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106295 1 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 1 idle 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:45.832 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106299 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:06.29 reactor_1' 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106299 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:06.29 reactor_1 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:35:45.833 05:30:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106295 0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 0 idle 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 48512 33152 S 0.0 0.4 0:12.78 reactor_0' 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 48512 33152 S 0.0 0.4 0:12.78 reactor_0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106295 1 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106295 1 idle 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106295 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106295 -w 256 01:35:47.208 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106299 root 20 0 64.2g 48512 33152 S 0.0 0.4 0:06.31 reactor_1' 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106299 root 20 0 64.2g 48512 33152 S 0.0 0.4 0:06.31 reactor_1 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:35:47.466 05:30:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:35:47.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 01:35:47.466 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:35:48.031 rmmod nvme_tcp 01:35:48.031 rmmod nvme_fabrics 01:35:48.031 rmmod nvme_keyring 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106295 ']' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 106295 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106295 ']' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106295 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106295 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:35:48.031 killing process with pid 106295 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106295' 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106295 01:35:48.031 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106295 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:35:48.289 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 01:35:48.548 01:35:48.548 real 0m15.853s 01:35:48.548 user 0m28.776s 01:35:48.548 sys 0m5.538s 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:48.548 05:30:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:35:48.548 ************************************ 01:35:48.548 END TEST nvmf_interrupt 01:35:48.548 ************************************ 01:35:48.548 01:35:48.548 real 18m51.754s 01:35:48.548 user 49m35.152s 01:35:48.548 sys 3m40.541s 01:35:48.548 05:30:39 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:35:48.548 05:30:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:48.548 ************************************ 01:35:48.548 END TEST nvmf_tcp 01:35:48.548 ************************************ 01:35:48.548 05:30:40 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 01:35:48.549 05:30:40 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:35:48.549 05:30:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:35:48.549 05:30:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:35:48.549 05:30:40 -- common/autotest_common.sh@10 -- # set +x 01:35:48.549 ************************************ 01:35:48.549 START TEST spdkcli_nvmf_tcp 01:35:48.549 ************************************ 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:35:48.549 * Looking for test storage... 
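One detail of the nvmf_interrupt teardown above is how the firewall rules are removed. During setup, the ipts wrapper inserted each ACCEPT rule for port 4420 with an extra '-m comment --comment SPDK_NVMF:...' tag; during nvmftestfini, the iptr helper rewrites the ruleset minus anything carrying that tag. A minimal sketch of the same tag-and-sweep pattern, using the rule visible in the log as the example:

# Add rules tagged with a recognizable comment so they can be swept later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# ... run the tests ...

# Remove every tagged rule in one pass: dump, filter, restore.
iptables-save | grep -v SPDK_NVMF | iptables-restore

This keeps the cleanup independent of how many rules were added or in what order, which matters here because the same nvmf/common.sh setup and teardown are shared by the other nvmf tests in this run, including the spdkcli_nvmf_tcp test that starts next.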
01:35:48.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:35:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:48.549 --rc genhtml_branch_coverage=1 01:35:48.549 --rc genhtml_function_coverage=1 01:35:48.549 --rc genhtml_legend=1 01:35:48.549 --rc geninfo_all_blocks=1 01:35:48.549 --rc geninfo_unexecuted_blocks=1 01:35:48.549 01:35:48.549 ' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:35:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:48.549 --rc genhtml_branch_coverage=1 
01:35:48.549 --rc genhtml_function_coverage=1 01:35:48.549 --rc genhtml_legend=1 01:35:48.549 --rc geninfo_all_blocks=1 01:35:48.549 --rc geninfo_unexecuted_blocks=1 01:35:48.549 01:35:48.549 ' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:35:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:48.549 --rc genhtml_branch_coverage=1 01:35:48.549 --rc genhtml_function_coverage=1 01:35:48.549 --rc genhtml_legend=1 01:35:48.549 --rc geninfo_all_blocks=1 01:35:48.549 --rc geninfo_unexecuted_blocks=1 01:35:48.549 01:35:48.549 ' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:35:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:35:48.549 --rc genhtml_branch_coverage=1 01:35:48.549 --rc genhtml_function_coverage=1 01:35:48.549 --rc genhtml_legend=1 01:35:48.549 --rc geninfo_all_blocks=1 01:35:48.549 --rc geninfo_unexecuted_blocks=1 01:35:48.549 01:35:48.549 ' 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:35:48.549 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:35:48.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106717 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106717 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106717 ']' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:35:48.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:48.808 05:30:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:35:48.808 [2024-12-09 05:30:40.214557] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:35:48.808 [2024-12-09 05:30:40.214641] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106717 ] 01:35:48.808 [2024-12-09 05:30:40.353649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:35:48.808 [2024-12-09 05:30:40.389532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:35:48.808 [2024-12-09 05:30:40.389540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:49.742 05:30:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:35:49.742 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:35:49.742 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:35:49.742 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:35:49.742 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 01:35:49.742 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:35:49.742 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:35:49.742 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:35:49.742 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:35:49.742 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:35:49.742 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:35:49.742 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:35:49.742 ' 01:35:52.271 [2024-12-09 05:30:43.679273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:35:53.688 [2024-12-09 05:30:44.960002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:35:56.216 [2024-12-09 05:30:47.305113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:35:58.116 [2024-12-09 05:30:49.318126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:35:59.492 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:35:59.492 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:35:59.492 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:35:59.492 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:35:59.492 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:35:59.492 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:35:59.492 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:35:59.492 05:30:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:35:59.492 05:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:35:59.492 05:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:35:59.492 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:35:59.492 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:35:59.492 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:35:59.492 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:35:59.492 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:36:00.059 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:36:00.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:36:00.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:36:00.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:36:00.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:36:00.059 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:36:00.059 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:36:00.059 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:36:00.059 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:36:00.059 ' 01:36:05.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:36:05.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:36:05.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:36:05.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:36:05.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:36:05.322 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:36:05.322 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 01:36:05.322 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 01:36:05.322 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106717 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106717 ']' 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106717 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106717 01:36:05.322 killing process with pid 106717 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106717' 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106717 01:36:05.322 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106717 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106717 ']' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106717 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106717 ']' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106717 01:36:05.581 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106717) - No such process 01:36:05.581 Process with pid 106717 is not found 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106717 is not found' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:36:05.581 05:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:36:05.581 ************************************ 01:36:05.581 END TEST spdkcli_nvmf_tcp 01:36:05.581 ************************************ 
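For reference, the spdkcli flow the test above drives through spdkcli_job.py can be reproduced by hand against a running nvmf_tgt. This is a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and an illustrative serial number (SN0001); the job script simply batches the same commands and checks each result string.

    SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
    $SPDKCLI /bdevs/malloc create 32 512 Malloc1                      # 32 MiB bdev, 512 B blocks
    $SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 SN0001 max_namespaces=4 allow_any_host=True
    $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    $SPDKCLI ll /nvmf                                                 # inspect the tree the match file is diffed against
    $SPDKCLI /nvmf/subsystem delete_all                               # teardown mirrors creation in reverse
    $SPDKCLI /bdevs/malloc delete Malloc1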
01:36:05.581 01:36:05.581 real 0m16.979s 01:36:05.581 user 0m36.461s 01:36:05.581 sys 0m0.537s 01:36:05.581 05:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:05.581 05:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:36:05.581 05:30:57 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:36:05.581 05:30:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:36:05.581 05:30:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:05.581 05:30:57 -- common/autotest_common.sh@10 -- # set +x 01:36:05.581 ************************************ 01:36:05.581 START TEST nvmf_identify_passthru 01:36:05.581 ************************************ 01:36:05.581 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:36:05.581 * Looking for test storage... 01:36:05.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:36:05.581 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:36:05.581 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:36:05.581 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 01:36:05.581 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 01:36:05.581 05:30:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:36:05.582 05:30:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 01:36:05.582 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:36:05.582 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:36:05.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:05.582 --rc genhtml_branch_coverage=1 01:36:05.582 --rc genhtml_function_coverage=1 01:36:05.582 --rc genhtml_legend=1 01:36:05.582 --rc geninfo_all_blocks=1 01:36:05.582 --rc geninfo_unexecuted_blocks=1 01:36:05.582 01:36:05.582 ' 01:36:05.582 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:36:05.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:05.582 --rc genhtml_branch_coverage=1 01:36:05.582 --rc genhtml_function_coverage=1 01:36:05.582 --rc genhtml_legend=1 01:36:05.582 --rc geninfo_all_blocks=1 01:36:05.582 --rc geninfo_unexecuted_blocks=1 01:36:05.582 01:36:05.582 ' 01:36:05.582 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:36:05.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:05.582 --rc genhtml_branch_coverage=1 01:36:05.582 --rc genhtml_function_coverage=1 01:36:05.582 --rc genhtml_legend=1 01:36:05.582 --rc geninfo_all_blocks=1 01:36:05.582 --rc geninfo_unexecuted_blocks=1 01:36:05.582 01:36:05.582 ' 01:36:05.582 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:36:05.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:05.582 --rc genhtml_branch_coverage=1 01:36:05.582 --rc genhtml_function_coverage=1 01:36:05.582 --rc genhtml_legend=1 01:36:05.582 --rc geninfo_all_blocks=1 01:36:05.582 --rc geninfo_unexecuted_blocks=1 01:36:05.582 01:36:05.582 ' 01:36:05.582 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:36:05.582 
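The lcov gate traced a few entries above amounts to a field-wise comparison of dotted version strings (scripts/common.sh's cmp_versions). A rough, self-contained sketch of that idea, with an illustrative helper name:

    version_lt() {                              # returns 0 (true) if $1 sorts before $2
        local IFS=.- i a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 is older than 2"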
05:30:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:36:05.582 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:36:05.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 01:36:05.842 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:36:05.842 05:30:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:05.842 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:36:05.842 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:36:05.842 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:36:05.842 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:36:05.843 Cannot find device "nvmf_init_br" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:36:05.843 Cannot find device "nvmf_init_br2" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:36:05.843 Cannot find device "nvmf_tgt_br" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:36:05.843 Cannot find device "nvmf_tgt_br2" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:36:05.843 Cannot find device "nvmf_init_br" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:36:05.843 Cannot find device "nvmf_init_br2" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:36:05.843 Cannot find device "nvmf_tgt_br" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:36:05.843 Cannot find device "nvmf_tgt_br2" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:36:05.843 Cannot find device "nvmf_br" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:36:05.843 Cannot find device "nvmf_init_if" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:36:05.843 Cannot find device "nvmf_init_if2" 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:36:05.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:36:05.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:36:05.843 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:36:06.102 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:36:06.102 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:36:06.102 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:36:06.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:36:06.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 01:36:06.102 01:36:06.102 --- 10.0.0.3 ping statistics --- 01:36:06.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:06.103 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:36:06.103 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:36:06.103 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 01:36:06.103 01:36:06.103 --- 10.0.0.4 ping statistics --- 01:36:06.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:06.103 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:36:06.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:36:06.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 01:36:06.103 01:36:06.103 --- 10.0.0.1 ping statistics --- 01:36:06.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:06.103 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:36:06.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:36:06.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 01:36:06.103 01:36:06.103 --- 10.0.0.2 ping statistics --- 01:36:06.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:06.103 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:36:06.103 05:30:57 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:36:06.103 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:36:06.103 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
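The nvmftestinit block above stitches the target network namespace to the host with veth pairs and a bridge before the pings verify reachability. A trimmed-down sketch of that topology, using the same interface names and addresses as the log (the test additionally opens TCP port 4420 with iptables on the initiator interfaces):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                          # bridge the two host-side ends
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                               # host should now reach the namespace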
01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 01:36:06.361 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:36:06.361 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:36:06.362 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:06.620 05:30:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:36:06.620 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:36:06.620 05:30:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:06.620 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107226 01:36:06.620 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:36:06.620 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107226 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107226 ']' 01:36:06.620 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 01:36:06.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 01:36:06.620 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:06.620 [2024-12-09 05:30:58.038574] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:36:06.620 [2024-12-09 05:30:58.038623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:36:06.620 [2024-12-09 05:30:58.177558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:36:06.620 [2024-12-09 05:30:58.213358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:36:06.620 [2024-12-09 05:30:58.213527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:36:06.620 [2024-12-09 05:30:58.213608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:36:06.620 [2024-12-09 05:30:58.213703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:36:06.620 [2024-12-09 05:30:58.213720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:36:06.621 [2024-12-09 05:30:58.214374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:36:06.621 [2024-12-09 05:30:58.214410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:36:06.621 [2024-12-09 05:30:58.214461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:06.621 [2024-12-09 05:30:58.214463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 01:36:07.557 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 [2024-12-09 05:30:58.988764] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 [2024-12-09 05:30:58.998123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 Nvme0n1 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 [2024-12-09 05:30:59.125681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:07.557 [ 01:36:07.557 { 01:36:07.557 "allow_any_host": true, 01:36:07.557 "hosts": [], 01:36:07.557 "listen_addresses": [], 01:36:07.557 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:36:07.557 "subtype": "Discovery" 01:36:07.557 }, 01:36:07.557 { 01:36:07.557 "allow_any_host": true, 01:36:07.557 "hosts": [], 01:36:07.557 "listen_addresses": [ 01:36:07.557 { 01:36:07.557 "adrfam": "IPv4", 01:36:07.557 "traddr": "10.0.0.3", 01:36:07.557 "trsvcid": "4420", 01:36:07.557 "trtype": "TCP" 01:36:07.557 } 01:36:07.557 ], 01:36:07.557 "max_cntlid": 65519, 01:36:07.557 "max_namespaces": 1, 01:36:07.557 "min_cntlid": 1, 01:36:07.557 "model_number": "SPDK bdev Controller", 01:36:07.557 "namespaces": [ 01:36:07.557 { 01:36:07.557 "bdev_name": "Nvme0n1", 01:36:07.557 "name": "Nvme0n1", 01:36:07.557 "nguid": "4097E6259F8444C499313C3CB1B687E1", 01:36:07.557 "nsid": 1, 01:36:07.557 "uuid": "4097e625-9f84-44c4-9931-3c3cb1b687e1" 01:36:07.557 } 01:36:07.557 ], 01:36:07.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:36:07.557 "serial_number": "SPDK00000000000001", 01:36:07.557 "subtype": "NVMe" 01:36:07.557 } 01:36:07.557 ] 01:36:07.557 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:36:07.557 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:36:08.125 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 01:36:08.125 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:36:08.125 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:36:08.125 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 01:36:08.405 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 01:36:08.405 05:30:59 
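The passthru wiring above (attach the local PCIe controller, expose it through an NVMe-oF subsystem on the namespace address, then identify it over the fabric) can be done by hand with scripts/rpc.py instead of the test's rpc_cmd wrapper. A minimal sketch, assuming the default RPC socket; because the target was started with --wait-for-rpc, framework_start_init has to follow the passthru setting:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_set_config --passthru-identify-ctrlr                   # forward Identify to the backing controller
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep -E 'Serial Number|Model Number'                       # should match the local PCIe identify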
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:08.405 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:36:08.405 05:30:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:36:08.405 rmmod nvme_tcp 01:36:08.405 rmmod nvme_fabrics 01:36:08.405 rmmod nvme_keyring 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107226 ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107226 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107226 ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107226 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107226 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:36:08.405 killing process with pid 107226 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107226' 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107226 01:36:08.405 05:30:59 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107226 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 01:36:08.664 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:36:08.664 05:31:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:36:08.664 05:31:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:36:08.923 05:31:00 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 01:36:08.923 01:36:08.923 real 0m3.255s 01:36:08.923 user 0m7.857s 01:36:08.923 sys 0m0.701s 01:36:08.923 05:31:00 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:08.923 ************************************ 01:36:08.923 END TEST nvmf_identify_passthru 01:36:08.923 ************************************ 01:36:08.923 05:31:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:36:08.923 05:31:00 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:36:08.923 05:31:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:08.923 05:31:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:08.923 05:31:00 -- common/autotest_common.sh@10 -- # set +x 01:36:08.923 ************************************ 01:36:08.923 START TEST nvmf_dif 01:36:08.923 ************************************ 01:36:08.923 05:31:00 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:36:08.923 * Looking for test storage... 
01:36:08.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:36:08.923 05:31:00 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:36:08.923 05:31:00 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 01:36:08.923 05:31:00 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:36:08.923 05:31:00 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:36:08.923 05:31:00 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:36:08.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:08.924 --rc genhtml_branch_coverage=1 01:36:08.924 --rc genhtml_function_coverage=1 01:36:08.924 --rc genhtml_legend=1 01:36:08.924 --rc geninfo_all_blocks=1 01:36:08.924 --rc geninfo_unexecuted_blocks=1 01:36:08.924 01:36:08.924 ' 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:36:08.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:08.924 --rc genhtml_branch_coverage=1 01:36:08.924 --rc genhtml_function_coverage=1 01:36:08.924 --rc genhtml_legend=1 01:36:08.924 --rc geninfo_all_blocks=1 01:36:08.924 --rc geninfo_unexecuted_blocks=1 01:36:08.924 01:36:08.924 ' 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:36:08.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:08.924 --rc genhtml_branch_coverage=1 01:36:08.924 --rc genhtml_function_coverage=1 01:36:08.924 --rc genhtml_legend=1 01:36:08.924 --rc geninfo_all_blocks=1 01:36:08.924 --rc geninfo_unexecuted_blocks=1 01:36:08.924 01:36:08.924 ' 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:36:08.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:36:08.924 --rc genhtml_branch_coverage=1 01:36:08.924 --rc genhtml_function_coverage=1 01:36:08.924 --rc genhtml_legend=1 01:36:08.924 --rc geninfo_all_blocks=1 01:36:08.924 --rc geninfo_unexecuted_blocks=1 01:36:08.924 01:36:08.924 ' 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:36:08.924 05:31:00 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:36:08.924 05:31:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:08.924 05:31:00 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:08.924 05:31:00 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:08.924 05:31:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:36:08.924 05:31:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:36:08.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:36:08.924 05:31:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:36:08.924 05:31:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:36:08.924 05:31:00 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:36:08.924 Cannot find device "nvmf_init_br" 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@162 -- # true 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:36:08.924 Cannot find device "nvmf_init_br2" 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@163 -- # true 01:36:08.924 05:31:00 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:36:09.184 Cannot find device "nvmf_tgt_br" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@164 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:36:09.184 Cannot find device "nvmf_tgt_br2" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@165 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:36:09.184 Cannot find device "nvmf_init_br" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@166 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:36:09.184 Cannot find device "nvmf_init_br2" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@167 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:36:09.184 Cannot find device "nvmf_tgt_br" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@168 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:36:09.184 Cannot find device "nvmf_tgt_br2" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@169 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:36:09.184 Cannot find device "nvmf_br" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@170 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 01:36:09.184 Cannot find device "nvmf_init_if" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@171 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:36:09.184 Cannot find device "nvmf_init_if2" 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@172 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:36:09.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@173 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:36:09.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@174 -- # true 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:36:09.184 05:31:00 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:36:09.185 05:31:00 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:36:09.185 05:31:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:36:09.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:36:09.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.159 ms 01:36:09.444 01:36:09.444 --- 10.0.0.3 ping statistics --- 01:36:09.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:09.444 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:36:09.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:36:09.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:36:09.444 01:36:09.444 --- 10.0.0.4 ping statistics --- 01:36:09.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:09.444 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:36:09.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:36:09.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:36:09.444 01:36:09.444 --- 10.0.0.1 ping statistics --- 01:36:09.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:09.444 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:36:09.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:36:09.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 01:36:09.444 01:36:09.444 --- 10.0.0.2 ping statistics --- 01:36:09.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:36:09.444 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@461 -- # return 0 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:36:09.444 05:31:00 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:36:09.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:36:09.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:36:09.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:36:09.702 05:31:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:36:09.703 05:31:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:36:09.703 05:31:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107624 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107624 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107624 ']' 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:36:09.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:36:09.703 05:31:01 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:36:09.703 05:31:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:09.703 [2024-12-09 05:31:01.190125] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:36:09.703 [2024-12-09 05:31:01.190176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:36:09.961 [2024-12-09 05:31:01.328247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:36:09.961 [2024-12-09 05:31:01.363971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:36:09.961 [2024-12-09 05:31:01.364008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:36:09.961 [2024-12-09 05:31:01.364014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:36:09.961 [2024-12-09 05:31:01.364019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:36:09.961 [2024-12-09 05:31:01.364023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:36:09.961 [2024-12-09 05:31:01.364276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:36:10.528 05:31:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:10.528 05:31:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:36:10.528 05:31:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:36:10.528 05:31:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:10.528 [2024-12-09 05:31:02.114541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:10.528 05:31:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:10.528 05:31:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:10.528 ************************************ 01:36:10.528 START TEST fio_dif_1_default 01:36:10.528 ************************************ 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:10.528 bdev_null0 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:10.528 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:10.528 05:31:02 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:10.787 [2024-12-09 05:31:02.158631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:10.787 { 01:36:10.787 "params": { 01:36:10.787 "name": "Nvme$subsystem", 01:36:10.787 "trtype": "$TEST_TRANSPORT", 01:36:10.787 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:10.787 "adrfam": "ipv4", 01:36:10.787 "trsvcid": "$NVMF_PORT", 01:36:10.787 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:10.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:10.787 "hdgst": ${hdgst:-false}, 01:36:10.787 "ddgst": ${ddgst:-false} 01:36:10.787 }, 01:36:10.787 "method": "bdev_nvme_attach_controller" 01:36:10.787 } 01:36:10.787 EOF 01:36:10.787 )") 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:10.787 "params": { 01:36:10.787 "name": "Nvme0", 01:36:10.787 "trtype": "tcp", 01:36:10.787 "traddr": "10.0.0.3", 01:36:10.787 "adrfam": "ipv4", 01:36:10.787 "trsvcid": "4420", 01:36:10.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:10.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:10.787 "hdgst": false, 01:36:10.787 "ddgst": false 01:36:10.787 }, 01:36:10.787 "method": "bdev_nvme_attach_controller" 01:36:10.787 }' 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:10.787 05:31:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:10.787 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:36:10.787 fio-3.35 01:36:10.787 Starting 1 thread 01:36:22.986 01:36:22.986 filename0: (groupid=0, jobs=1): err= 0: pid=107709: Mon Dec 9 05:31:12 2024 01:36:22.986 read: IOPS=231, BW=927KiB/s (950kB/s)(9280KiB/10007msec) 01:36:22.986 slat (nsec): min=5534, max=31713, avg=6490.70, stdev=2333.22 01:36:22.986 clat (usec): min=290, max=42146, avg=17235.51, stdev=19963.61 01:36:22.986 lat (usec): min=295, max=42154, avg=17242.00, stdev=19963.50 01:36:22.986 clat percentiles (usec): 01:36:22.986 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 01:36:22.986 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 404], 
60.00th=[40109], 01:36:22.986 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:36:22.986 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 01:36:22.986 | 99.99th=[42206] 01:36:22.986 bw ( KiB/s): min= 480, max= 1728, per=99.85%, avg=926.65, stdev=276.15, samples=20 01:36:22.986 iops : min= 120, max= 432, avg=231.60, stdev=69.04, samples=20 01:36:22.986 lat (usec) : 500=56.55%, 750=1.55% 01:36:22.986 lat (msec) : 10=0.17%, 50=41.72% 01:36:22.986 cpu : usr=93.53%, sys=6.09%, ctx=18, majf=0, minf=9 01:36:22.986 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:22.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:22.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:22.986 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:22.986 latency : target=0, window=0, percentile=100.00%, depth=4 01:36:22.986 01:36:22.986 Run status group 0 (all jobs): 01:36:22.986 READ: bw=927KiB/s (950kB/s), 927KiB/s-927KiB/s (950kB/s-950kB/s), io=9280KiB (9503kB), run=10007-10007msec 01:36:22.986 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:36:22.986 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:36:22.986 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:36:22.986 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:36:22.986 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 01:36:22.987 real 0m10.918s 01:36:22.987 user 0m9.926s 01:36:22.987 sys 0m0.803s 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:22.987 ************************************ 01:36:22.987 END TEST fio_dif_1_default 01:36:22.987 ************************************ 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:36:22.987 05:31:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:22.987 05:31:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 ************************************ 01:36:22.987 START TEST fio_dif_1_multi_subsystems 01:36:22.987 ************************************ 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 bdev_null0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 [2024-12-09 05:31:13.114210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 bdev_null1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:22.987 { 01:36:22.987 "params": { 01:36:22.987 "name": "Nvme$subsystem", 01:36:22.987 "trtype": "$TEST_TRANSPORT", 01:36:22.987 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:22.987 "adrfam": "ipv4", 01:36:22.987 "trsvcid": "$NVMF_PORT", 01:36:22.987 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:22.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:22.987 "hdgst": ${hdgst:-false}, 01:36:22.987 "ddgst": ${ddgst:-false} 01:36:22.987 }, 01:36:22.987 "method": "bdev_nvme_attach_controller" 01:36:22.987 } 01:36:22.987 EOF 01:36:22.987 )") 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:22.987 { 01:36:22.987 "params": { 01:36:22.987 "name": "Nvme$subsystem", 01:36:22.987 "trtype": "$TEST_TRANSPORT", 01:36:22.987 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:22.987 "adrfam": "ipv4", 01:36:22.987 "trsvcid": "$NVMF_PORT", 01:36:22.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:22.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:22.987 "hdgst": ${hdgst:-false}, 01:36:22.987 "ddgst": ${ddgst:-false} 01:36:22.987 }, 01:36:22.987 "method": "bdev_nvme_attach_controller" 01:36:22.987 } 01:36:22.987 EOF 01:36:22.987 )") 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:36:22.987 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:22.987 "params": { 01:36:22.987 "name": "Nvme0", 01:36:22.987 "trtype": "tcp", 01:36:22.987 "traddr": "10.0.0.3", 01:36:22.987 "adrfam": "ipv4", 01:36:22.987 "trsvcid": "4420", 01:36:22.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:22.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:22.988 "hdgst": false, 01:36:22.988 "ddgst": false 01:36:22.988 }, 01:36:22.988 "method": "bdev_nvme_attach_controller" 01:36:22.988 },{ 01:36:22.988 "params": { 01:36:22.988 "name": "Nvme1", 01:36:22.988 "trtype": "tcp", 01:36:22.988 "traddr": "10.0.0.3", 01:36:22.988 "adrfam": "ipv4", 01:36:22.988 "trsvcid": "4420", 01:36:22.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:36:22.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:36:22.988 "hdgst": false, 01:36:22.988 "ddgst": false 01:36:22.988 }, 01:36:22.988 "method": "bdev_nvme_attach_controller" 01:36:22.988 }' 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:22.988 05:31:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:22.988 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:36:22.988 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:36:22.988 fio-3.35 01:36:22.988 Starting 2 threads 01:36:33.027 01:36:33.027 filename0: (groupid=0, jobs=1): err= 0: pid=107874: Mon Dec 9 05:31:23 2024 01:36:33.027 read: IOPS=149, BW=599KiB/s (613kB/s)(6000KiB/10019msec) 01:36:33.027 slat (nsec): min=5571, max=37278, avg=6412.21, stdev=2238.35 01:36:33.027 clat (usec): min=287, max=41319, avg=26698.07, stdev=19332.28 01:36:33.027 lat (usec): min=292, max=41325, avg=26704.49, stdev=19331.99 01:36:33.027 clat percentiles (usec): 01:36:33.027 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 01:36:33.027 | 30.00th=[ 326], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 01:36:33.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:36:33.027 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:36:33.027 | 99.99th=[41157] 01:36:33.027 bw ( KiB/s): min= 448, max= 736, per=48.63%, avg=598.40, stdev=73.49, samples=20 01:36:33.027 iops : 
min= 112, max= 184, avg=149.60, stdev=18.37, samples=20 01:36:33.027 lat (usec) : 500=34.13%, 750=0.47%, 1000=0.07% 01:36:33.027 lat (msec) : 2=0.27%, 50=65.07% 01:36:33.027 cpu : usr=95.02%, sys=4.65%, ctx=11, majf=0, minf=0 01:36:33.027 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:33.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:33.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:33.027 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:33.027 latency : target=0, window=0, percentile=100.00%, depth=4 01:36:33.027 filename1: (groupid=0, jobs=1): err= 0: pid=107875: Mon Dec 9 05:31:23 2024 01:36:33.027 read: IOPS=157, BW=632KiB/s (647kB/s)(6336KiB/10031msec) 01:36:33.027 slat (nsec): min=5522, max=33535, avg=6849.16, stdev=1940.67 01:36:33.027 clat (usec): min=288, max=41323, avg=25310.98, stdev=19725.23 01:36:33.027 lat (usec): min=294, max=41329, avg=25317.83, stdev=19725.17 01:36:33.027 clat percentiles (usec): 01:36:33.027 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 01:36:33.027 | 30.00th=[ 330], 40.00th=[40109], 50.00th=[40633], 60.00th=[41157], 01:36:33.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 01:36:33.027 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 01:36:33.027 | 99.99th=[41157] 01:36:33.027 bw ( KiB/s): min= 416, max= 992, per=51.39%, avg=632.00, stdev=135.32, samples=20 01:36:33.027 iops : min= 104, max= 248, avg=158.00, stdev=33.83, samples=20 01:36:33.027 lat (usec) : 500=37.63%, 750=0.51% 01:36:33.027 lat (msec) : 2=0.25%, 50=61.62% 01:36:33.027 cpu : usr=95.87%, sys=3.78%, ctx=15, majf=0, minf=9 01:36:33.027 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:33.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:33.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:33.027 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:33.027 latency : target=0, window=0, percentile=100.00%, depth=4 01:36:33.027 01:36:33.027 Run status group 0 (all jobs): 01:36:33.027 READ: bw=1230KiB/s (1259kB/s), 599KiB/s-632KiB/s (613kB/s-647kB/s), io=12.0MiB (12.6MB), run=10019-10031msec 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 01:36:33.027 real 0m11.080s 01:36:33.027 user 0m19.858s 01:36:33.027 sys 0m1.030s 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:33.027 ************************************ 01:36:33.027 END TEST fio_dif_1_multi_subsystems 01:36:33.027 ************************************ 01:36:33.027 05:31:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:36:33.027 05:31:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:33.027 05:31:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 ************************************ 01:36:33.027 START TEST fio_dif_rand_params 01:36:33.027 ************************************ 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 bdev_null0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:33.027 [2024-12-09 05:31:24.235604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:33.027 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:33.028 
05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:33.028 { 01:36:33.028 "params": { 01:36:33.028 "name": "Nvme$subsystem", 01:36:33.028 "trtype": "$TEST_TRANSPORT", 01:36:33.028 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:33.028 "adrfam": "ipv4", 01:36:33.028 "trsvcid": "$NVMF_PORT", 01:36:33.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:33.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:33.028 "hdgst": ${hdgst:-false}, 01:36:33.028 "ddgst": ${ddgst:-false} 01:36:33.028 }, 01:36:33.028 "method": "bdev_nvme_attach_controller" 01:36:33.028 } 01:36:33.028 EOF 01:36:33.028 )") 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
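For reference, the rpc_cmd calls traced above (bdev_null_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener) are thin wrappers over SPDK's scripts/rpc.py. A minimal sketch of the equivalent direct invocations, assuming the repository path used elsewhere in this run, the default RPC socket, and that the NVMe-oF TCP transport was already created earlier in the run; the snippet is illustrative and not part of the test scripts:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from the build layout above

    # Null bdev with 16-byte metadata and DIF type 3, matching the traced arguments
    # (64 MiB capacity, 512-byte blocks).
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # NVMe-oF subsystem exporting the null bdev over TCP on 10.0.0.3:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
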
01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:33.028 "params": { 01:36:33.028 "name": "Nvme0", 01:36:33.028 "trtype": "tcp", 01:36:33.028 "traddr": "10.0.0.3", 01:36:33.028 "adrfam": "ipv4", 01:36:33.028 "trsvcid": "4420", 01:36:33.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:33.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:33.028 "hdgst": false, 01:36:33.028 "ddgst": false 01:36:33.028 }, 01:36:33.028 "method": "bdev_nvme_attach_controller" 01:36:33.028 }' 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:33.028 05:31:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:33.028 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:36:33.028 ... 
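The fio job file itself is generated by gen_fio_conf and handed to fio on /dev/fd/61, so it never appears verbatim in the log. An illustrative reconstruction consistent with the parameters traced above (randread, 128k blocks, iodepth 3, 3 jobs, 5-second runtime); the filename assumes SPDK's usual Nvme0n1 naming for namespace 1 of the controller attached via the JSON config printed above:

    [global]
    ioengine=spdk_bdev      ; provided by the preloaded SPDK fio plugin
    thread=1                ; the spdk_bdev ioengine requires threaded jobs
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1        ; assumed bdev name: controller "Nvme0", namespace 1
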
01:36:33.028 fio-3.35 01:36:33.028 Starting 3 threads 01:36:38.317 01:36:38.317 filename0: (groupid=0, jobs=1): err= 0: pid=108025: Mon Dec 9 05:31:29 2024 01:36:38.317 read: IOPS=341, BW=42.7MiB/s (44.8MB/s)(214MiB/5003msec) 01:36:38.317 slat (nsec): min=3995, max=24531, avg=6858.68, stdev=1147.82 01:36:38.317 clat (usec): min=3058, max=50987, avg=8761.22, stdev=8622.02 01:36:38.317 lat (usec): min=3064, max=50996, avg=8768.07, stdev=8622.14 01:36:38.317 clat percentiles (usec): 01:36:38.317 | 1.00th=[ 3163], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5604], 01:36:38.317 | 30.00th=[ 5997], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 7767], 01:36:38.317 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8717], 95.00th=[ 9372], 01:36:38.317 | 99.00th=[48497], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 01:36:38.317 | 99.99th=[51119] 01:36:38.317 bw ( KiB/s): min=26880, max=60928, per=35.89%, avg=43750.40, stdev=10605.01, samples=10 01:36:38.317 iops : min= 210, max= 476, avg=341.80, stdev=82.85, samples=10 01:36:38.317 lat (msec) : 4=3.68%, 10=91.76%, 50=4.44%, 100=0.12% 01:36:38.317 cpu : usr=93.68%, sys=5.40%, ctx=8, majf=0, minf=9 01:36:38.317 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:38.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.317 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:38.317 latency : target=0, window=0, percentile=100.00%, depth=3 01:36:38.317 filename0: (groupid=0, jobs=1): err= 0: pid=108026: Mon Dec 9 05:31:29 2024 01:36:38.317 read: IOPS=250, BW=31.4MiB/s (32.9MB/s)(157MiB/5004msec) 01:36:38.317 slat (nsec): min=5417, max=35160, avg=8339.27, stdev=1827.39 01:36:38.317 clat (usec): min=2564, max=51990, avg=11936.74, stdev=11834.95 01:36:38.317 lat (usec): min=2572, max=51999, avg=11945.08, stdev=11834.95 01:36:38.317 clat percentiles (usec): 01:36:38.317 | 1.00th=[ 3163], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6063], 01:36:38.317 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 01:36:38.317 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10945], 95.00th=[49546], 01:36:38.317 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[52167], 01:36:38.317 | 99.99th=[52167] 01:36:38.317 bw ( KiB/s): min=20224, max=41472, per=26.31%, avg=32076.80, stdev=7409.81, samples=10 01:36:38.317 iops : min= 158, max= 324, avg=250.60, stdev=57.89, samples=10 01:36:38.317 lat (msec) : 4=3.74%, 10=74.76%, 20=12.42%, 50=5.89%, 100=3.18% 01:36:38.317 cpu : usr=94.46%, sys=4.30%, ctx=126, majf=0, minf=0 01:36:38.317 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:38.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.317 issued rwts: total=1256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:38.317 latency : target=0, window=0, percentile=100.00%, depth=3 01:36:38.317 filename0: (groupid=0, jobs=1): err= 0: pid=108027: Mon Dec 9 05:31:29 2024 01:36:38.317 read: IOPS=359, BW=44.9MiB/s (47.1MB/s)(225MiB/5004msec) 01:36:38.317 slat (nsec): min=5387, max=46704, avg=7368.47, stdev=1915.39 01:36:38.317 clat (usec): min=2712, max=50513, avg=8333.70, stdev=4865.98 01:36:38.317 lat (usec): min=2718, max=50521, avg=8341.06, stdev=4866.10 01:36:38.317 clat percentiles (usec): 01:36:38.318 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 
6063], 01:36:38.318 | 30.00th=[ 6587], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[ 9634], 01:36:38.318 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 01:36:38.318 | 99.00th=[44303], 99.50th=[47449], 99.90th=[49546], 99.95th=[50594], 01:36:38.318 | 99.99th=[50594] 01:36:38.318 bw ( KiB/s): min=34304, max=60160, per=37.71%, avg=45977.60, stdev=8789.08, samples=10 01:36:38.318 iops : min= 268, max= 470, avg=359.20, stdev=68.66, samples=10 01:36:38.318 lat (msec) : 4=16.45%, 10=46.08%, 20=36.46%, 50=0.94%, 100=0.06% 01:36:38.318 cpu : usr=94.28%, sys=4.68%, ctx=37, majf=0, minf=0 01:36:38.318 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:38.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:38.318 issued rwts: total=1799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:38.318 latency : target=0, window=0, percentile=100.00%, depth=3 01:36:38.318 01:36:38.318 Run status group 0 (all jobs): 01:36:38.318 READ: bw=119MiB/s (125MB/s), 31.4MiB/s-44.9MiB/s (32.9MB/s-47.1MB/s), io=596MiB (625MB), run=5003-5004msec 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 bdev_null0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 [2024-12-09 05:31:30.138385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 bdev_null1 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 bdev_null2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.579 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:38.840 { 01:36:38.840 "params": { 01:36:38.840 "name": "Nvme$subsystem", 01:36:38.840 "trtype": "$TEST_TRANSPORT", 01:36:38.840 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:38.840 "adrfam": "ipv4", 01:36:38.840 "trsvcid": "$NVMF_PORT", 01:36:38.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:38.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:38.840 "hdgst": ${hdgst:-false}, 01:36:38.840 "ddgst": ${ddgst:-false} 01:36:38.840 }, 01:36:38.840 "method": "bdev_nvme_attach_controller" 01:36:38.840 } 01:36:38.840 EOF 01:36:38.840 )") 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:38.840 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:38.840 { 01:36:38.840 "params": { 01:36:38.840 "name": "Nvme$subsystem", 01:36:38.840 "trtype": "$TEST_TRANSPORT", 01:36:38.840 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:38.840 "adrfam": "ipv4", 01:36:38.840 "trsvcid": "$NVMF_PORT", 01:36:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:38.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:38.841 "hdgst": ${hdgst:-false}, 01:36:38.841 "ddgst": ${ddgst:-false} 01:36:38.841 }, 01:36:38.841 "method": "bdev_nvme_attach_controller" 01:36:38.841 } 01:36:38.841 EOF 01:36:38.841 )") 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:38.841 
05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:38.841 { 01:36:38.841 "params": { 01:36:38.841 "name": "Nvme$subsystem", 01:36:38.841 "trtype": "$TEST_TRANSPORT", 01:36:38.841 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:38.841 "adrfam": "ipv4", 01:36:38.841 "trsvcid": "$NVMF_PORT", 01:36:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:38.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:38.841 "hdgst": ${hdgst:-false}, 01:36:38.841 "ddgst": ${ddgst:-false} 01:36:38.841 }, 01:36:38.841 "method": "bdev_nvme_attach_controller" 01:36:38.841 } 01:36:38.841 EOF 01:36:38.841 )") 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:38.841 "params": { 01:36:38.841 "name": "Nvme0", 01:36:38.841 "trtype": "tcp", 01:36:38.841 "traddr": "10.0.0.3", 01:36:38.841 "adrfam": "ipv4", 01:36:38.841 "trsvcid": "4420", 01:36:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:38.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:38.841 "hdgst": false, 01:36:38.841 "ddgst": false 01:36:38.841 }, 01:36:38.841 "method": "bdev_nvme_attach_controller" 01:36:38.841 },{ 01:36:38.841 "params": { 01:36:38.841 "name": "Nvme1", 01:36:38.841 "trtype": "tcp", 01:36:38.841 "traddr": "10.0.0.3", 01:36:38.841 "adrfam": "ipv4", 01:36:38.841 "trsvcid": "4420", 01:36:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:36:38.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:36:38.841 "hdgst": false, 01:36:38.841 "ddgst": false 01:36:38.841 }, 01:36:38.841 "method": "bdev_nvme_attach_controller" 01:36:38.841 },{ 01:36:38.841 "params": { 01:36:38.841 "name": "Nvme2", 01:36:38.841 "trtype": "tcp", 01:36:38.841 "traddr": "10.0.0.3", 01:36:38.841 "adrfam": "ipv4", 01:36:38.841 "trsvcid": "4420", 01:36:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:36:38.841 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:36:38.841 "hdgst": false, 01:36:38.841 "ddgst": false 01:36:38.841 }, 01:36:38.841 "method": "bdev_nvme_attach_controller" 01:36:38.841 }' 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
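The three-controller configuration printed above is assembled by gen_nvmf_target_json from one heredoc fragment per subsystem, joined with IFS=, and pretty-printed with jq. A condensed sketch of that pattern follows; TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are supplied by the test environment (tcp / 10.0.0.3 / 4420 in this run), and the top-level "subsystems" wrapper is an assumption about what the spdk_bdev ioengine expects rather than something visible in the trace:

    config=()
    for subsystem in 0 1 2; do
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done

    # Comma-join the fragments into a bdev subsystem config and validate /
    # pretty-print with jq before handing it to fio on /dev/fd/62.
    IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
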
01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:38.841 05:31:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:38.841 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:36:38.841 ... 01:36:38.841 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:36:38.841 ... 01:36:38.841 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:36:38.841 ... 01:36:38.841 fio-3.35 01:36:38.841 Starting 24 threads 01:36:51.062 01:36:51.062 filename0: (groupid=0, jobs=1): err= 0: pid=108126: Mon Dec 9 05:31:41 2024 01:36:51.062 read: IOPS=303, BW=1214KiB/s (1243kB/s)(11.9MiB/10006msec) 01:36:51.062 slat (usec): min=3, max=8013, avg=10.47, stdev=145.35 01:36:51.062 clat (msec): min=20, max=104, avg=52.67, stdev=16.89 01:36:51.062 lat (msec): min=20, max=104, avg=52.68, stdev=16.89 01:36:51.062 clat percentiles (msec): 01:36:51.062 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 36], 01:36:51.062 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 59], 01:36:51.062 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 83], 01:36:51.062 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 105], 01:36:51.062 | 99.99th=[ 105] 01:36:51.062 bw ( KiB/s): min= 896, max= 2016, per=4.16%, avg=1208.30, stdev=249.73, samples=20 01:36:51.062 iops : min= 224, max= 504, avg=302.00, stdev=62.42, samples=20 01:36:51.062 lat (msec) : 50=50.59%, 100=49.24%, 250=0.16% 01:36:51.062 cpu : usr=32.78%, sys=0.45%, ctx=852, majf=0, minf=9 01:36:51.062 IO depths : 1=1.1%, 2=2.7%, 4=9.9%, 8=73.9%, 16=12.3%, 32=0.0%, >=64=0.0% 01:36:51.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 issued rwts: total=3036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.062 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.062 filename0: (groupid=0, jobs=1): err= 0: pid=108127: Mon Dec 9 05:31:41 2024 01:36:51.062 read: IOPS=345, BW=1384KiB/s (1417kB/s)(13.6MiB/10032msec) 01:36:51.062 slat (usec): min=3, max=5018, avg=12.35, stdev=145.27 01:36:51.062 clat (msec): min=3, max=107, avg=46.10, stdev=18.93 01:36:51.062 lat (msec): min=3, max=107, avg=46.11, stdev=18.93 01:36:51.062 clat percentiles (msec): 01:36:51.062 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 33], 01:36:51.062 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 47], 60.00th=[ 49], 01:36:51.062 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 81], 01:36:51.062 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 107], 99.95th=[ 107], 01:36:51.062 | 99.99th=[ 108] 01:36:51.062 bw ( KiB/s): min= 856, max= 3396, per=4.75%, avg=1382.60, stdev=500.89, samples=20 01:36:51.062 iops : min= 214, max= 849, avg=345.65, stdev=125.22, samples=20 01:36:51.062 lat (msec) : 4=0.29%, 10=3.75%, 20=3.75%, 50=54.88%, 100=36.62% 01:36:51.062 lat (msec) : 250=0.72% 01:36:51.062 cpu : usr=43.53%, sys=0.66%, ctx=1477, majf=0, minf=9 01:36:51.062 IO depths : 1=1.0%, 2=2.3%, 4=8.8%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 01:36:51.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 complete : 0=0.0%, 4=89.9%, 8=5.3%, 
16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 issued rwts: total=3471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.062 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.062 filename0: (groupid=0, jobs=1): err= 0: pid=108128: Mon Dec 9 05:31:41 2024 01:36:51.062 read: IOPS=312, BW=1250KiB/s (1280kB/s)(12.2MiB/10019msec) 01:36:51.062 slat (usec): min=3, max=4016, avg= 9.32, stdev=71.76 01:36:51.062 clat (msec): min=14, max=108, avg=51.09, stdev=17.07 01:36:51.062 lat (msec): min=14, max=108, avg=51.10, stdev=17.07 01:36:51.062 clat percentiles (msec): 01:36:51.062 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 38], 01:36:51.062 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 55], 01:36:51.062 | 70.00th=[ 58], 80.00th=[ 66], 90.00th=[ 72], 95.00th=[ 82], 01:36:51.062 | 99.00th=[ 99], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 01:36:51.062 | 99.99th=[ 109] 01:36:51.062 bw ( KiB/s): min= 896, max= 2096, per=4.30%, avg=1249.05, stdev=236.00, samples=20 01:36:51.062 iops : min= 224, max= 524, avg=312.25, stdev=59.00, samples=20 01:36:51.062 lat (msec) : 20=2.72%, 50=50.03%, 100=46.39%, 250=0.86% 01:36:51.062 cpu : usr=44.09%, sys=0.55%, ctx=1403, majf=0, minf=9 01:36:51.062 IO depths : 1=2.2%, 2=4.9%, 4=14.0%, 8=68.0%, 16=10.9%, 32=0.0%, >=64=0.0% 01:36:51.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 issued rwts: total=3130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.062 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.062 filename0: (groupid=0, jobs=1): err= 0: pid=108129: Mon Dec 9 05:31:41 2024 01:36:51.062 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10012msec) 01:36:51.062 slat (usec): min=3, max=8013, avg=18.65, stdev=295.45 01:36:51.062 clat (msec): min=23, max=139, avg=57.84, stdev=19.33 01:36:51.062 lat (msec): min=23, max=139, avg=57.86, stdev=19.32 01:36:51.062 clat percentiles (msec): 01:36:51.062 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 46], 01:36:51.062 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 01:36:51.062 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 01:36:51.062 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 140], 99.95th=[ 140], 01:36:51.062 | 99.99th=[ 140] 01:36:51.062 bw ( KiB/s): min= 872, max= 1768, per=3.79%, avg=1102.50, stdev=215.26, samples=20 01:36:51.062 iops : min= 218, max= 442, avg=275.60, stdev=53.78, samples=20 01:36:51.062 lat (msec) : 50=40.97%, 100=55.85%, 250=3.18% 01:36:51.062 cpu : usr=32.74%, sys=0.47%, ctx=864, majf=0, minf=9 01:36:51.062 IO depths : 1=1.4%, 2=3.3%, 4=12.7%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 01:36:51.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.062 issued rwts: total=2763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.062 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename0: (groupid=0, jobs=1): err= 0: pid=108130: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=326, BW=1307KiB/s (1338kB/s)(12.8MiB/10029msec) 01:36:51.063 slat (usec): min=4, max=8017, avg=10.90, stdev=144.28 01:36:51.063 clat (msec): min=6, max=125, avg=48.89, stdev=18.37 01:36:51.063 lat (msec): min=6, max=125, avg=48.90, stdev=18.37 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 35], 01:36:51.063 | 30.00th=[ 38], 
40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 51], 01:36:51.063 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 83], 01:36:51.063 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 126], 99.95th=[ 126], 01:36:51.063 | 99.99th=[ 126] 01:36:51.063 bw ( KiB/s): min= 936, max= 2664, per=4.49%, avg=1304.40, stdev=365.51, samples=20 01:36:51.063 iops : min= 234, max= 666, avg=326.10, stdev=91.38, samples=20 01:36:51.063 lat (msec) : 10=1.16%, 20=4.94%, 50=53.07%, 100=40.52%, 250=0.31% 01:36:51.063 cpu : usr=37.57%, sys=0.49%, ctx=1124, majf=0, minf=9 01:36:51.063 IO depths : 1=0.4%, 2=1.0%, 4=7.4%, 8=77.7%, 16=13.5%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=89.5%, 8=6.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=3277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename0: (groupid=0, jobs=1): err= 0: pid=108131: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=330, BW=1320KiB/s (1352kB/s)(12.9MiB/10027msec) 01:36:51.063 slat (usec): min=5, max=11015, avg=10.43, stdev=191.40 01:36:51.063 clat (msec): min=10, max=115, avg=48.43, stdev=16.85 01:36:51.063 lat (msec): min=10, max=115, avg=48.44, stdev=16.85 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 34], 01:36:51.063 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 48], 60.00th=[ 51], 01:36:51.063 | 70.00th=[ 57], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 82], 01:36:51.063 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 116], 01:36:51.063 | 99.99th=[ 116] 01:36:51.063 bw ( KiB/s): min= 1072, max= 2096, per=4.53%, avg=1316.20, stdev=235.51, samples=20 01:36:51.063 iops : min= 268, max= 524, avg=329.05, stdev=58.88, samples=20 01:36:51.063 lat (msec) : 20=0.48%, 50=59.99%, 100=38.98%, 250=0.54% 01:36:51.063 cpu : usr=41.77%, sys=0.55%, ctx=1197, majf=0, minf=9 01:36:51.063 IO depths : 1=0.9%, 2=1.8%, 4=8.3%, 8=76.1%, 16=12.9%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=3309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename0: (groupid=0, jobs=1): err= 0: pid=108132: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10005msec) 01:36:51.063 slat (usec): min=4, max=8016, avg=12.26, stdev=166.21 01:36:51.063 clat (msec): min=3, max=131, avg=45.29, stdev=19.40 01:36:51.063 lat (msec): min=3, max=131, avg=45.31, stdev=19.40 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 33], 01:36:51.063 | 30.00th=[ 35], 40.00th=[ 39], 50.00th=[ 45], 60.00th=[ 48], 01:36:51.063 | 70.00th=[ 52], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 81], 01:36:51.063 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.063 | 99.99th=[ 132] 01:36:51.063 bw ( KiB/s): min= 896, max= 3328, per=4.90%, avg=1424.84, stdev=517.13, samples=19 01:36:51.063 iops : min= 224, max= 832, avg=356.21, stdev=129.28, samples=19 01:36:51.063 lat (msec) : 4=1.28%, 10=3.26%, 20=3.17%, 50=58.99%, 100=32.71% 01:36:51.063 lat (msec) : 250=0.60% 01:36:51.063 cpu : usr=44.67%, sys=0.57%, ctx=1274, majf=0, minf=0 01:36:51.063 IO depths : 1=1.2%, 2=2.7%, 4=9.8%, 8=74.2%, 16=12.1%, 32=0.0%, >=64=0.0% 
01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=3528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename0: (groupid=0, jobs=1): err= 0: pid=108133: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=282, BW=1131KiB/s (1158kB/s)(11.0MiB/10002msec) 01:36:51.063 slat (usec): min=3, max=8016, avg=10.46, stdev=150.64 01:36:51.063 clat (msec): min=3, max=131, avg=56.49, stdev=18.33 01:36:51.063 lat (msec): min=3, max=131, avg=56.50, stdev=18.33 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 46], 01:36:51.063 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 01:36:51.063 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 86], 01:36:51.063 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.063 | 99.99th=[ 132] 01:36:51.063 bw ( KiB/s): min= 896, max= 1696, per=3.83%, avg=1114.11, stdev=170.25, samples=19 01:36:51.063 iops : min= 224, max= 424, avg=278.53, stdev=42.56, samples=19 01:36:51.063 lat (msec) : 4=0.57%, 10=0.35%, 20=0.39%, 50=42.61%, 100=53.85% 01:36:51.063 lat (msec) : 250=2.23% 01:36:51.063 cpu : usr=32.98%, sys=0.37%, ctx=847, majf=0, minf=9 01:36:51.063 IO depths : 1=1.2%, 2=2.9%, 4=11.5%, 8=72.4%, 16=12.1%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename1: (groupid=0, jobs=1): err= 0: pid=108134: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=266, BW=1065KiB/s (1091kB/s)(10.4MiB/10019msec) 01:36:51.063 slat (nsec): min=3835, max=43599, avg=7661.57, stdev=3659.56 01:36:51.063 clat (msec): min=17, max=131, avg=59.97, stdev=18.82 01:36:51.063 lat (msec): min=17, max=131, avg=59.98, stdev=18.82 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 48], 01:36:51.063 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 62], 01:36:51.063 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 01:36:51.063 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.063 | 99.99th=[ 132] 01:36:51.063 bw ( KiB/s): min= 890, max= 1920, per=3.66%, avg=1064.35, stdev=224.79, samples=20 01:36:51.063 iops : min= 222, max= 480, avg=266.05, stdev=56.22, samples=20 01:36:51.063 lat (msec) : 20=0.37%, 50=34.78%, 100=61.51%, 250=3.34% 01:36:51.063 cpu : usr=32.87%, sys=0.49%, ctx=848, majf=0, minf=9 01:36:51.063 IO depths : 1=2.4%, 2=5.8%, 4=16.8%, 8=64.7%, 16=10.3%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=2668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename1: (groupid=0, jobs=1): err= 0: pid=108135: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.7MiB/10008msec) 01:36:51.063 slat (nsec): min=3052, max=34828, avg=6787.40, stdev=2680.50 01:36:51.063 clat (msec): min=17, max=119, avg=58.16, stdev=16.84 
01:36:51.063 lat (msec): min=17, max=120, avg=58.16, stdev=16.84 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 47], 01:36:51.063 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 01:36:51.063 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 85], 01:36:51.063 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 01:36:51.063 | 99.99th=[ 121] 01:36:51.063 bw ( KiB/s): min= 896, max= 1512, per=3.78%, avg=1098.80, stdev=150.14, samples=20 01:36:51.063 iops : min= 224, max= 378, avg=274.70, stdev=37.54, samples=20 01:36:51.063 lat (msec) : 20=0.15%, 50=39.55%, 100=57.98%, 250=2.33% 01:36:51.063 cpu : usr=32.51%, sys=0.55%, ctx=866, majf=0, minf=9 01:36:51.063 IO depths : 1=0.9%, 2=2.1%, 4=9.0%, 8=74.7%, 16=13.3%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=90.1%, 8=6.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename1: (groupid=0, jobs=1): err= 0: pid=108136: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=329, BW=1317KiB/s (1348kB/s)(12.9MiB/10012msec) 01:36:51.063 slat (usec): min=5, max=4012, avg= 9.77, stdev=92.24 01:36:51.063 clat (msec): min=22, max=130, avg=48.55, stdev=16.95 01:36:51.063 lat (msec): min=22, max=130, avg=48.56, stdev=16.96 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 35], 01:36:51.063 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 48], 60.00th=[ 50], 01:36:51.063 | 70.00th=[ 56], 80.00th=[ 63], 90.00th=[ 71], 95.00th=[ 83], 01:36:51.063 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 131], 99.95th=[ 131], 01:36:51.063 | 99.99th=[ 131] 01:36:51.063 bw ( KiB/s): min= 976, max= 2016, per=4.51%, avg=1311.65, stdev=243.32, samples=20 01:36:51.063 iops : min= 244, max= 504, avg=327.85, stdev=60.84, samples=20 01:36:51.063 lat (msec) : 50=61.35%, 100=37.86%, 250=0.79% 01:36:51.063 cpu : usr=43.31%, sys=0.49%, ctx=1247, majf=0, minf=9 01:36:51.063 IO depths : 1=1.2%, 2=2.6%, 4=9.8%, 8=74.1%, 16=12.3%, 32=0.0%, >=64=0.0% 01:36:51.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.063 issued rwts: total=3296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.063 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.063 filename1: (groupid=0, jobs=1): err= 0: pid=108137: Mon Dec 9 05:31:41 2024 01:36:51.063 read: IOPS=291, BW=1167KiB/s (1195kB/s)(11.4MiB/10023msec) 01:36:51.063 slat (usec): min=4, max=8013, avg=15.26, stdev=241.78 01:36:51.063 clat (msec): min=22, max=131, avg=54.78, stdev=17.04 01:36:51.063 lat (msec): min=22, max=131, avg=54.79, stdev=17.04 01:36:51.063 clat percentiles (msec): 01:36:51.063 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 40], 01:36:51.063 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 60], 01:36:51.063 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 85], 01:36:51.063 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.063 | 99.99th=[ 132] 01:36:51.064 bw ( KiB/s): min= 896, max= 1984, per=3.99%, avg=1161.95, stdev=222.32, samples=20 01:36:51.064 iops : min= 224, max= 496, avg=290.45, stdev=55.58, samples=20 01:36:51.064 lat (msec) : 50=44.41%, 100=54.98%, 250=0.62% 01:36:51.064 cpu : 
usr=32.88%, sys=0.45%, ctx=880, majf=0, minf=9 01:36:51.064 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=76.2%, 16=12.8%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=2923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename1: (groupid=0, jobs=1): err= 0: pid=108138: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=383, BW=1533KiB/s (1569kB/s)(15.0MiB/10020msec) 01:36:51.064 slat (usec): min=4, max=4013, avg=10.42, stdev=112.00 01:36:51.064 clat (usec): min=1092, max=96026, avg=41654.91, stdev=16155.51 01:36:51.064 lat (usec): min=1100, max=96033, avg=41665.33, stdev=16155.48 01:36:51.064 clat percentiles (usec): 01:36:51.064 | 1.00th=[ 2671], 5.00th=[12780], 10.00th=[24511], 20.00th=[31851], 01:36:51.064 | 30.00th=[33817], 40.00th=[37487], 50.00th=[40109], 60.00th=[44303], 01:36:51.064 | 70.00th=[47973], 80.00th=[53216], 90.00th=[61080], 95.00th=[70779], 01:36:51.064 | 99.00th=[84411], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 01:36:51.064 | 99.99th=[95945] 01:36:51.064 bw ( KiB/s): min= 1200, max= 3689, per=5.27%, avg=1532.05, stdev=529.92, samples=20 01:36:51.064 iops : min= 300, max= 922, avg=383.00, stdev=132.43, samples=20 01:36:51.064 lat (msec) : 2=0.42%, 4=2.11%, 10=2.06%, 20=2.97%, 50=68.87% 01:36:51.064 lat (msec) : 100=23.57% 01:36:51.064 cpu : usr=47.15%, sys=0.73%, ctx=1503, majf=0, minf=9 01:36:51.064 IO depths : 1=0.7%, 2=1.7%, 4=8.9%, 8=75.9%, 16=12.8%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=3839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename1: (groupid=0, jobs=1): err= 0: pid=108139: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=307, BW=1230KiB/s (1260kB/s)(12.0MiB/10015msec) 01:36:51.064 slat (usec): min=3, max=8013, avg=12.83, stdev=170.30 01:36:51.064 clat (msec): min=11, max=125, avg=51.94, stdev=19.18 01:36:51.064 lat (msec): min=11, max=125, avg=51.95, stdev=19.19 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 36], 01:36:51.064 | 30.00th=[ 41], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 01:36:51.064 | 70.00th=[ 60], 80.00th=[ 69], 90.00th=[ 80], 95.00th=[ 86], 01:36:51.064 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 126], 99.95th=[ 126], 01:36:51.064 | 99.99th=[ 126] 01:36:51.064 bw ( KiB/s): min= 912, max= 2412, per=4.22%, avg=1226.15, stdev=320.28, samples=20 01:36:51.064 iops : min= 228, max= 603, avg=306.45, stdev=80.05, samples=20 01:36:51.064 lat (msec) : 20=4.55%, 50=44.42%, 100=50.16%, 250=0.88% 01:36:51.064 cpu : usr=43.37%, sys=0.55%, ctx=1354, majf=0, minf=9 01:36:51.064 IO depths : 1=1.7%, 2=3.7%, 4=12.8%, 8=70.0%, 16=11.8%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=3080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename1: (groupid=0, jobs=1): err= 0: pid=108140: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=290, BW=1164KiB/s 
(1192kB/s)(11.4MiB/10019msec) 01:36:51.064 slat (usec): min=3, max=4019, avg= 8.70, stdev=74.37 01:36:51.064 clat (msec): min=17, max=120, avg=54.90, stdev=15.83 01:36:51.064 lat (msec): min=17, max=120, avg=54.91, stdev=15.83 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 42], 01:36:51.064 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 57], 01:36:51.064 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 77], 95.00th=[ 84], 01:36:51.064 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 122], 01:36:51.064 | 99.99th=[ 122] 01:36:51.064 bw ( KiB/s): min= 896, max= 1632, per=3.99%, avg=1159.90, stdev=170.90, samples=20 01:36:51.064 iops : min= 224, max= 408, avg=289.95, stdev=42.73, samples=20 01:36:51.064 lat (msec) : 20=0.62%, 50=39.35%, 100=59.28%, 250=0.75% 01:36:51.064 cpu : usr=42.89%, sys=0.52%, ctx=1285, majf=0, minf=9 01:36:51.064 IO depths : 1=1.7%, 2=4.3%, 4=13.8%, 8=68.4%, 16=11.8%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=2915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename1: (groupid=0, jobs=1): err= 0: pid=108141: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=289, BW=1160KiB/s (1188kB/s)(11.4MiB/10028msec) 01:36:51.064 slat (usec): min=5, max=8013, avg= 9.60, stdev=148.51 01:36:51.064 clat (msec): min=7, max=120, avg=55.06, stdev=17.60 01:36:51.064 lat (msec): min=7, max=120, avg=55.07, stdev=17.60 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 38], 01:36:51.064 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 61], 01:36:51.064 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 77], 95.00th=[ 85], 01:36:51.064 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 01:36:51.064 | 99.99th=[ 121] 01:36:51.064 bw ( KiB/s): min= 896, max= 1923, per=3.98%, avg=1156.95, stdev=200.32, samples=20 01:36:51.064 iops : min= 224, max= 480, avg=289.20, stdev=49.93, samples=20 01:36:51.064 lat (msec) : 10=1.10%, 20=0.55%, 50=43.36%, 100=53.92%, 250=1.07% 01:36:51.064 cpu : usr=32.54%, sys=0.49%, ctx=874, majf=0, minf=9 01:36:51.064 IO depths : 1=0.7%, 2=1.6%, 4=8.5%, 8=76.9%, 16=12.3%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=89.5%, 8=5.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=2908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename2: (groupid=0, jobs=1): err= 0: pid=108142: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10003msec) 01:36:51.064 slat (usec): min=3, max=8013, avg= 9.95, stdev=155.99 01:36:51.064 clat (msec): min=3, max=108, avg=60.68, stdev=18.17 01:36:51.064 lat (msec): min=3, max=108, avg=60.69, stdev=18.17 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 48], 01:36:51.064 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 01:36:51.064 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 01:36:51.064 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 01:36:51.064 | 99.99th=[ 109] 01:36:51.064 bw ( KiB/s): min= 896, max= 1584, per=3.58%, avg=1042.11, 
stdev=151.22, samples=19 01:36:51.064 iops : min= 224, max= 396, avg=260.53, stdev=37.81, samples=19 01:36:51.064 lat (msec) : 4=0.61%, 20=1.21%, 50=30.47%, 100=65.50%, 250=2.20% 01:36:51.064 cpu : usr=32.51%, sys=0.53%, ctx=862, majf=0, minf=9 01:36:51.064 IO depths : 1=2.7%, 2=5.8%, 4=15.5%, 8=65.7%, 16=10.2%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename2: (groupid=0, jobs=1): err= 0: pid=108143: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=307, BW=1230KiB/s (1260kB/s)(12.1MiB/10040msec) 01:36:51.064 slat (usec): min=5, max=8015, avg=11.64, stdev=161.17 01:36:51.064 clat (msec): min=3, max=131, avg=51.90, stdev=19.41 01:36:51.064 lat (msec): min=4, max=131, avg=51.91, stdev=19.41 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 36], 01:36:51.064 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 01:36:51.064 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 84], 01:36:51.064 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.064 | 99.99th=[ 132] 01:36:51.064 bw ( KiB/s): min= 1024, max= 2932, per=4.22%, avg=1227.85, stdev=413.20, samples=20 01:36:51.064 iops : min= 256, max= 733, avg=306.95, stdev=103.30, samples=20 01:36:51.064 lat (msec) : 4=0.03%, 10=3.08%, 20=3.79%, 50=43.23%, 100=48.93% 01:36:51.064 lat (msec) : 250=0.94% 01:36:51.064 cpu : usr=36.05%, sys=0.49%, ctx=1017, majf=0, minf=9 01:36:51.064 IO depths : 1=1.2%, 2=2.6%, 4=10.9%, 8=73.3%, 16=12.0%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=89.6%, 8=5.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=3088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.064 filename2: (groupid=0, jobs=1): err= 0: pid=108144: Mon Dec 9 05:31:41 2024 01:36:51.064 read: IOPS=327, BW=1312KiB/s (1343kB/s)(12.8MiB/10028msec) 01:36:51.064 slat (usec): min=3, max=8015, avg=13.85, stdev=186.06 01:36:51.064 clat (msec): min=9, max=126, avg=48.65, stdev=17.92 01:36:51.064 lat (msec): min=9, max=126, avg=48.66, stdev=17.92 01:36:51.064 clat percentiles (msec): 01:36:51.064 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 30], 20.00th=[ 34], 01:36:51.064 | 30.00th=[ 39], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 51], 01:36:51.064 | 70.00th=[ 56], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 81], 01:36:51.064 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 114], 99.95th=[ 127], 01:36:51.064 | 99.99th=[ 127] 01:36:51.064 bw ( KiB/s): min= 992, max= 2304, per=4.50%, avg=1309.20, stdev=294.24, samples=20 01:36:51.064 iops : min= 248, max= 576, avg=327.30, stdev=73.56, samples=20 01:36:51.064 lat (msec) : 10=0.49%, 20=1.95%, 50=57.86%, 100=37.98%, 250=1.73% 01:36:51.064 cpu : usr=45.75%, sys=0.62%, ctx=1327, majf=0, minf=9 01:36:51.064 IO depths : 1=1.1%, 2=2.3%, 4=9.6%, 8=74.5%, 16=12.5%, 32=0.0%, >=64=0.0% 01:36:51.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.064 issued rwts: total=3289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.064 latency : target=0, window=0, 
percentile=100.00%, depth=16 01:36:51.064 filename2: (groupid=0, jobs=1): err= 0: pid=108145: Mon Dec 9 05:31:41 2024 01:36:51.065 read: IOPS=307, BW=1228KiB/s (1258kB/s)(12.0MiB/10017msec) 01:36:51.065 slat (usec): min=3, max=5013, avg= 9.48, stdev=91.54 01:36:51.065 clat (msec): min=12, max=122, avg=52.03, stdev=18.83 01:36:51.065 lat (msec): min=12, max=122, avg=52.03, stdev=18.83 01:36:51.065 clat percentiles (msec): 01:36:51.065 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 36], 01:36:51.065 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 55], 01:36:51.065 | 70.00th=[ 60], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 91], 01:36:51.065 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115], 01:36:51.065 | 99.99th=[ 123] 01:36:51.065 bw ( KiB/s): min= 896, max= 2072, per=4.22%, avg=1226.30, stdev=288.34, samples=20 01:36:51.065 iops : min= 224, max= 518, avg=306.55, stdev=72.11, samples=20 01:36:51.065 lat (msec) : 20=1.17%, 50=49.80%, 100=47.66%, 250=1.37% 01:36:51.065 cpu : usr=42.47%, sys=0.68%, ctx=1446, majf=0, minf=9 01:36:51.065 IO depths : 1=1.1%, 2=2.3%, 4=8.6%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 01:36:51.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 issued rwts: total=3076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.065 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.065 filename2: (groupid=0, jobs=1): err= 0: pid=108146: Mon Dec 9 05:31:41 2024 01:36:51.065 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.7MiB/10014msec) 01:36:51.065 slat (usec): min=3, max=8012, avg=13.70, stdev=187.14 01:36:51.065 clat (msec): min=17, max=126, avg=58.18, stdev=18.21 01:36:51.065 lat (msec): min=17, max=126, avg=58.20, stdev=18.21 01:36:51.065 clat percentiles (msec): 01:36:51.065 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 46], 01:36:51.065 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 61], 01:36:51.065 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 93], 01:36:51.065 | 99.00th=[ 114], 99.50th=[ 115], 99.90th=[ 127], 99.95th=[ 127], 01:36:51.065 | 99.99th=[ 127] 01:36:51.065 bw ( KiB/s): min= 768, max= 1712, per=3.77%, avg=1095.20, stdev=190.78, samples=20 01:36:51.065 iops : min= 192, max= 428, avg=273.80, stdev=47.70, samples=20 01:36:51.065 lat (msec) : 20=0.36%, 50=31.66%, 100=65.68%, 250=2.29% 01:36:51.065 cpu : usr=41.69%, sys=0.64%, ctx=1461, majf=0, minf=9 01:36:51.065 IO depths : 1=2.5%, 2=5.9%, 4=16.7%, 8=64.4%, 16=10.4%, 32=0.0%, >=64=0.0% 01:36:51.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.065 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.065 filename2: (groupid=0, jobs=1): err= 0: pid=108147: Mon Dec 9 05:31:41 2024 01:36:51.065 read: IOPS=288, BW=1152KiB/s (1180kB/s)(11.3MiB/10021msec) 01:36:51.065 slat (usec): min=3, max=14017, avg=18.79, stdev=328.89 01:36:51.065 clat (msec): min=20, max=108, avg=55.44, stdev=17.09 01:36:51.065 lat (msec): min=20, max=108, avg=55.46, stdev=17.09 01:36:51.065 clat percentiles (msec): 01:36:51.065 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 41], 01:36:51.065 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 57], 01:36:51.065 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 87], 01:36:51.065 | 99.00th=[ 
100], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 01:36:51.065 | 99.99th=[ 109] 01:36:51.065 bw ( KiB/s): min= 872, max= 1664, per=3.95%, avg=1148.40, stdev=207.46, samples=20 01:36:51.065 iops : min= 218, max= 416, avg=287.10, stdev=51.86, samples=20 01:36:51.065 lat (msec) : 50=41.43%, 100=57.95%, 250=0.62% 01:36:51.065 cpu : usr=41.68%, sys=0.62%, ctx=1441, majf=0, minf=9 01:36:51.065 IO depths : 1=0.6%, 2=1.7%, 4=8.1%, 8=76.2%, 16=13.4%, 32=0.0%, >=64=0.0% 01:36:51.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 complete : 0=0.0%, 4=89.8%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.065 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.065 filename2: (groupid=0, jobs=1): err= 0: pid=108148: Mon Dec 9 05:31:41 2024 01:36:51.065 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10003msec) 01:36:51.065 slat (usec): min=3, max=8031, avg=18.34, stdev=275.03 01:36:51.065 clat (msec): min=3, max=131, avg=57.92, stdev=18.17 01:36:51.065 lat (msec): min=3, max=131, avg=57.94, stdev=18.18 01:36:51.065 clat percentiles (msec): 01:36:51.065 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 47], 01:36:51.065 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 01:36:51.065 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 01:36:51.065 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 01:36:51.065 | 99.99th=[ 132] 01:36:51.065 bw ( KiB/s): min= 848, max= 1563, per=3.76%, avg=1093.21, stdev=161.97, samples=19 01:36:51.065 iops : min= 212, max= 390, avg=273.26, stdev=40.37, samples=19 01:36:51.065 lat (msec) : 4=0.18%, 10=0.40%, 20=0.18%, 50=40.49%, 100=57.00% 01:36:51.065 lat (msec) : 250=1.74% 01:36:51.065 cpu : usr=35.82%, sys=0.46%, ctx=967, majf=0, minf=9 01:36:51.065 IO depths : 1=1.7%, 2=4.4%, 4=14.3%, 8=68.0%, 16=11.5%, 32=0.0%, >=64=0.0% 01:36:51.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.065 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.065 filename2: (groupid=0, jobs=1): err= 0: pid=108149: Mon Dec 9 05:31:41 2024 01:36:51.065 read: IOPS=275, BW=1101KiB/s (1128kB/s)(10.8MiB/10005msec) 01:36:51.065 slat (nsec): min=3858, max=30397, avg=7511.70, stdev=3298.47 01:36:51.065 clat (msec): min=15, max=116, avg=58.05, stdev=18.20 01:36:51.065 lat (msec): min=15, max=116, avg=58.06, stdev=18.20 01:36:51.065 clat percentiles (msec): 01:36:51.065 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 47], 01:36:51.065 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 01:36:51.065 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 87], 01:36:51.065 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 117], 99.95th=[ 117], 01:36:51.065 | 99.99th=[ 117] 01:36:51.065 bw ( KiB/s): min= 896, max= 1851, per=3.78%, avg=1099.53, stdev=207.92, samples=19 01:36:51.065 iops : min= 224, max= 462, avg=274.84, stdev=51.83, samples=19 01:36:51.065 lat (msec) : 20=0.58%, 50=40.65%, 100=57.28%, 250=1.49% 01:36:51.065 cpu : usr=32.75%, sys=0.48%, ctx=871, majf=0, minf=9 01:36:51.065 IO depths : 1=2.0%, 2=4.8%, 4=14.0%, 8=68.2%, 16=11.0%, 32=0.0%, >=64=0.0% 01:36:51.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:51.065 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 
64=0.0%, >=64=0.0% 01:36:51.065 issued rwts: total=2755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:51.065 latency : target=0, window=0, percentile=100.00%, depth=16 01:36:51.065 01:36:51.065 Run status group 0 (all jobs): 01:36:51.065 READ: bw=28.4MiB/s (29.8MB/s), 1054KiB/s-1533KiB/s (1079kB/s-1569kB/s), io=285MiB (299MB), run=10002-10040msec 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:36:51.065 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 bdev_null0 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 [2024-12-09 05:31:41.432732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:36:51.066 05:31:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 bdev_null1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:51.066 { 01:36:51.066 "params": { 01:36:51.066 "name": "Nvme$subsystem", 01:36:51.066 "trtype": "$TEST_TRANSPORT", 01:36:51.066 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:51.066 "adrfam": "ipv4", 01:36:51.066 "trsvcid": "$NVMF_PORT", 01:36:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:51.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:51.066 "hdgst": ${hdgst:-false}, 01:36:51.066 "ddgst": ${ddgst:-false} 01:36:51.066 }, 01:36:51.066 "method": "bdev_nvme_attach_controller" 01:36:51.066 } 01:36:51.066 EOF 01:36:51.066 )") 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:51.066 05:31:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:51.066 { 01:36:51.066 "params": { 01:36:51.066 "name": "Nvme$subsystem", 01:36:51.066 "trtype": "$TEST_TRANSPORT", 01:36:51.066 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:51.066 "adrfam": "ipv4", 01:36:51.066 "trsvcid": "$NVMF_PORT", 01:36:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:51.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:51.066 "hdgst": ${hdgst:-false}, 01:36:51.066 "ddgst": ${ddgst:-false} 01:36:51.066 }, 01:36:51.066 "method": "bdev_nvme_attach_controller" 01:36:51.066 } 01:36:51.066 EOF 01:36:51.066 )") 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:51.066 "params": { 01:36:51.066 "name": "Nvme0", 01:36:51.066 "trtype": "tcp", 01:36:51.066 "traddr": "10.0.0.3", 01:36:51.066 "adrfam": "ipv4", 01:36:51.066 "trsvcid": "4420", 01:36:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:51.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:51.066 "hdgst": false, 01:36:51.066 "ddgst": false 01:36:51.066 }, 01:36:51.066 "method": "bdev_nvme_attach_controller" 01:36:51.066 },{ 01:36:51.066 "params": { 01:36:51.066 "name": "Nvme1", 01:36:51.066 "trtype": "tcp", 01:36:51.066 "traddr": "10.0.0.3", 01:36:51.066 "adrfam": "ipv4", 01:36:51.066 "trsvcid": "4420", 01:36:51.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:36:51.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:36:51.066 "hdgst": false, 01:36:51.066 "ddgst": false 01:36:51.066 }, 01:36:51.066 "method": "bdev_nvme_attach_controller" 01:36:51.066 }' 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:51.066 05:31:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:51.066 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:36:51.066 ... 01:36:51.066 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:36:51.066 ... 
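The trace above shows how the harness drives this run: gen_nvmf_target_json emits one bdev_nvme_attach_controller parameter set per subsystem, the result is handed to fio through /dev/fd/62, and fio itself is launched through the SPDK bdev fio plugin (LD_PRELOAD of build/fio/spdk_bdev with --ioengine=spdk_bdev). A rough standalone equivalent is sketched below; the subsystems/bdev wrapper follows SPDK's generic JSON-config layout, and the bdev.json/job.fio file names, the Nvme0n1/Nvme1n1 bdev names, and the exact job options are illustrative assumptions rather than values copied from this log.

  # bdev.json -- minimal bdev config (sketch); the two parameter sets mirror the
  # ones printed above, wrapped in SPDK's standard "subsystems" JSON layout
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    } ]
  }

  # job.fio -- sketch of a job file matching this run (randread, bs=8k,16k,128k,
  # iodepth=8, numjobs=2, runtime=5); Nvme0n1/Nvme1n1 assume namespace 1 of each controller
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1

  # run through the fio bdev plugin, same invocation style as in the trace above
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio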
01:36:51.067 fio-3.35 01:36:51.067 Starting 4 threads 01:36:56.376 01:36:56.376 filename0: (groupid=0, jobs=1): err= 0: pid=108286: Mon Dec 9 05:31:47 2024 01:36:56.376 read: IOPS=2737, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 01:36:56.376 slat (nsec): min=3951, max=44220, avg=11131.66, stdev=6646.11 01:36:56.376 clat (usec): min=1828, max=3861, avg=2869.41, stdev=72.87 01:36:56.376 lat (usec): min=1835, max=3874, avg=2880.54, stdev=72.18 01:36:56.376 clat percentiles (usec): 01:36:56.376 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 01:36:56.376 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 01:36:56.376 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2933], 95.00th=[ 2966], 01:36:56.376 | 99.00th=[ 2999], 99.50th=[ 2999], 99.90th=[ 3163], 99.95th=[ 3851], 01:36:56.376 | 99.99th=[ 3851] 01:36:56.376 bw ( KiB/s): min=21632, max=22144, per=25.00%, avg=21902.22, stdev=149.33, samples=9 01:36:56.376 iops : min= 2704, max= 2768, avg=2737.78, stdev=18.67, samples=9 01:36:56.376 lat (msec) : 2=0.06%, 4=99.94% 01:36:56.376 cpu : usr=96.28%, sys=2.92%, ctx=7, majf=0, minf=0 01:36:56.376 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.376 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.376 issued rwts: total=13688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:56.376 latency : target=0, window=0, percentile=100.00%, depth=8 01:36:56.376 filename0: (groupid=0, jobs=1): err= 0: pid=108288: Mon Dec 9 05:31:47 2024 01:36:56.376 read: IOPS=2740, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 01:36:56.376 slat (usec): min=5, max=108, avg= 7.12, stdev= 3.81 01:36:56.376 clat (usec): min=796, max=5465, avg=2884.24, stdev=105.02 01:36:56.376 lat (usec): min=802, max=5472, avg=2891.36, stdev=104.88 01:36:56.376 clat percentiles (usec): 01:36:56.376 | 1.00th=[ 2769], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2835], 01:36:56.376 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 01:36:56.376 | 70.00th=[ 2933], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 2966], 01:36:56.376 | 99.00th=[ 2999], 99.50th=[ 2999], 99.90th=[ 3195], 99.95th=[ 3261], 01:36:56.376 | 99.99th=[ 4555] 01:36:56.376 bw ( KiB/s): min=21760, max=22016, per=25.03%, avg=21930.67, stdev=110.85, samples=9 01:36:56.376 iops : min= 2720, max= 2752, avg=2741.33, stdev=13.86, samples=9 01:36:56.376 lat (usec) : 1000=0.12% 01:36:56.376 lat (msec) : 2=0.13%, 4=99.74%, 10=0.01% 01:36:56.376 cpu : usr=95.32%, sys=3.68%, ctx=118, majf=0, minf=0 01:36:56.376 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.376 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.376 issued rwts: total=13704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:56.376 latency : target=0, window=0, percentile=100.00%, depth=8 01:36:56.376 filename1: (groupid=0, jobs=1): err= 0: pid=108289: Mon Dec 9 05:31:47 2024 01:36:56.376 read: IOPS=2737, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 01:36:56.376 slat (nsec): min=3888, max=71004, avg=12122.20, stdev=7431.28 01:36:56.376 clat (usec): min=1659, max=3538, avg=2865.53, stdev=75.05 01:36:56.376 lat (usec): min=1665, max=3559, avg=2877.66, stdev=73.90 01:36:56.376 clat percentiles (usec): 01:36:56.376 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2769], 20.00th=[ 2802], 01:36:56.376 | 
30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 01:36:56.376 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 2966], 01:36:56.376 | 99.00th=[ 2999], 99.50th=[ 2999], 99.90th=[ 3097], 99.95th=[ 3523], 01:36:56.376 | 99.99th=[ 3523] 01:36:56.377 bw ( KiB/s): min=21632, max=22144, per=25.00%, avg=21902.22, stdev=149.33, samples=9 01:36:56.377 iops : min= 2704, max= 2768, avg=2737.78, stdev=18.67, samples=9 01:36:56.377 lat (msec) : 2=0.06%, 4=99.94% 01:36:56.377 cpu : usr=96.18%, sys=3.00%, ctx=14, majf=0, minf=0 01:36:56.377 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:56.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.377 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.377 issued rwts: total=13688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:56.377 latency : target=0, window=0, percentile=100.00%, depth=8 01:36:56.377 filename1: (groupid=0, jobs=1): err= 0: pid=108290: Mon Dec 9 05:31:47 2024 01:36:56.377 read: IOPS=2737, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 01:36:56.377 slat (nsec): min=3982, max=44280, avg=9282.25, stdev=5974.10 01:36:56.377 clat (usec): min=2077, max=3550, avg=2879.44, stdev=63.78 01:36:56.377 lat (usec): min=2081, max=3568, avg=2888.72, stdev=62.31 01:36:56.377 clat percentiles (usec): 01:36:56.377 | 1.00th=[ 2737], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 01:36:56.377 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 01:36:56.377 | 70.00th=[ 2933], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 2966], 01:36:56.377 | 99.00th=[ 2999], 99.50th=[ 2999], 99.90th=[ 3064], 99.95th=[ 3064], 01:36:56.377 | 99.99th=[ 3490] 01:36:56.377 bw ( KiB/s): min=21760, max=22144, per=25.00%, avg=21902.22, stdev=134.92, samples=9 01:36:56.377 iops : min= 2720, max= 2768, avg=2737.78, stdev=16.87, samples=9 01:36:56.377 lat (msec) : 4=100.00% 01:36:56.377 cpu : usr=96.04%, sys=3.08%, ctx=40, majf=0, minf=0 01:36:56.377 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:36:56.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.377 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:36:56.377 issued rwts: total=13688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:36:56.377 latency : target=0, window=0, percentile=100.00%, depth=8 01:36:56.377 01:36:56.377 Run status group 0 (all jobs): 01:36:56.377 READ: bw=85.6MiB/s (89.7MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=428MiB (449MB), run=5001-5001msec 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 01:36:56.377 real 0m23.241s 01:36:56.377 user 2m8.400s 01:36:56.377 sys 0m3.209s 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 ************************************ 01:36:56.377 END TEST fio_dif_rand_params 01:36:56.377 ************************************ 01:36:56.377 05:31:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:36:56.377 05:31:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:36:56.377 05:31:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 ************************************ 01:36:56.377 START TEST fio_dif_digest 01:36:56.377 ************************************ 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 bdev_null0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:36:56.377 [2024-12-09 05:31:47.517175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
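For reference, the target-side setup the trace just performed can be reproduced against a running nvmf_tgt with plain rpc.py calls; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py. The sketch below assumes the target application is already running with the TCP transport created (that happens earlier in this log) and otherwise reuses the exact arguments shown above.

  # Sketch: null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp   # already done earlier in this run; listed for completeness
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420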
01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:36:56.377 { 01:36:56.377 "params": { 01:36:56.377 "name": "Nvme$subsystem", 01:36:56.377 "trtype": "$TEST_TRANSPORT", 01:36:56.377 "traddr": "$NVMF_FIRST_TARGET_IP", 01:36:56.377 "adrfam": "ipv4", 01:36:56.377 "trsvcid": "$NVMF_PORT", 01:36:56.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:36:56.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:36:56.377 "hdgst": ${hdgst:-false}, 01:36:56.377 "ddgst": ${ddgst:-false} 01:36:56.377 }, 01:36:56.377 "method": "bdev_nvme_attach_controller" 01:36:56.377 } 01:36:56.377 EOF 01:36:56.377 )") 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:36:56.377 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:36:56.378 "params": { 01:36:56.378 "name": "Nvme0", 01:36:56.378 "trtype": "tcp", 01:36:56.378 "traddr": "10.0.0.3", 01:36:56.378 "adrfam": "ipv4", 01:36:56.378 "trsvcid": "4420", 01:36:56.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:36:56.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:36:56.378 "hdgst": true, 01:36:56.378 "ddgst": true 01:36:56.378 }, 01:36:56.378 "method": "bdev_nvme_attach_controller" 01:36:56.378 }' 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:36:56.378 05:31:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:36:56.378 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:36:56.378 ... 
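The digest run that follows drives this single DIF-type-3 null bdev with three 128 KiB random-read jobs at queue depth 3 for 10 seconds; the TCP header and data digests are switched on through the "hdgst"/"ddgst" fields of the bdev_nvme_attach_controller parameters printed above (an initiator-side bdev option), not through fio itself. A hand-written job file along the same lines might look like the sketch below; the digest.fio/bdev.json names and the Nvme0n1 bdev name are assumptions for illustration.

  # digest.fio -- sketch of an equivalent job file for the hdgst/ddgst run
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1
  [filename0]
  filename=Nvme0n1

  # bdev.json here would carry a single attach-controller entry with
  # "hdgst": true, "ddgst": true, otherwise identical to the earlier sketch
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio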
01:36:56.378 fio-3.35 01:36:56.378 Starting 3 threads 01:37:08.588 01:37:08.588 filename0: (groupid=0, jobs=1): err= 0: pid=108396: Mon Dec 9 05:31:58 2024 01:37:08.588 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(405MiB/10004msec) 01:37:08.588 slat (nsec): min=4090, max=25073, avg=8411.73, stdev=1862.28 01:37:08.588 clat (usec): min=4230, max=51017, avg=9259.34, stdev=2799.44 01:37:08.588 lat (usec): min=4239, max=51027, avg=9267.75, stdev=2799.48 01:37:08.588 clat percentiles (usec): 01:37:08.588 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6718], 01:37:08.588 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 01:37:08.588 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11469], 01:37:08.588 | 99.00th=[12125], 99.50th=[12387], 99.90th=[50070], 99.95th=[51119], 01:37:08.588 | 99.99th=[51119] 01:37:08.588 bw ( KiB/s): min=36608, max=45568, per=37.31%, avg=41013.89, stdev=2685.74, samples=19 01:37:08.588 iops : min= 286, max= 356, avg=320.42, stdev=20.98, samples=19 01:37:08.588 lat (msec) : 10=58.23%, 20=41.49%, 50=0.15%, 100=0.12% 01:37:08.588 cpu : usr=95.64%, sys=3.38%, ctx=49, majf=0, minf=0 01:37:08.588 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:37:08.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 issued rwts: total=3237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:08.588 latency : target=0, window=0, percentile=100.00%, depth=3 01:37:08.588 filename0: (groupid=0, jobs=1): err= 0: pid=108397: Mon Dec 9 05:31:58 2024 01:37:08.588 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(330MiB/10004msec) 01:37:08.588 slat (nsec): min=4029, max=37002, avg=8328.13, stdev=1540.98 01:37:08.588 clat (usec): min=3594, max=15908, avg=11348.21, stdev=2321.10 01:37:08.588 lat (usec): min=3601, max=15916, avg=11356.54, stdev=2321.29 01:37:08.588 clat percentiles (usec): 01:37:08.588 | 1.00th=[ 4948], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8291], 01:37:08.588 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 01:37:08.588 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 01:37:08.588 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14746], 99.95th=[15664], 01:37:08.588 | 99.99th=[15926] 01:37:08.588 bw ( KiB/s): min=30464, max=36864, per=30.51%, avg=33545.95, stdev=1712.59, samples=19 01:37:08.588 iops : min= 238, max= 288, avg=262.05, stdev=13.39, samples=19 01:37:08.588 lat (msec) : 4=0.87%, 10=27.29%, 20=71.84% 01:37:08.588 cpu : usr=95.00%, sys=3.90%, ctx=7, majf=0, minf=0 01:37:08.588 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:37:08.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 issued rwts: total=2642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:08.588 latency : target=0, window=0, percentile=100.00%, depth=3 01:37:08.588 filename0: (groupid=0, jobs=1): err= 0: pid=108398: Mon Dec 9 05:31:58 2024 01:37:08.588 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(341MiB/10024msec) 01:37:08.588 slat (nsec): min=5978, max=32901, avg=8870.40, stdev=1999.25 01:37:08.588 clat (usec): min=4002, max=51173, avg=11003.56, stdev=9040.40 01:37:08.588 lat (usec): min=4025, max=51182, avg=11012.43, stdev=9040.31 01:37:08.588 clat percentiles (usec): 01:37:08.588 | 1.00th=[ 6390], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 01:37:08.588 | 
30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 01:37:08.588 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[47973], 01:37:08.588 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 01:37:08.588 | 99.99th=[51119] 01:37:08.588 bw ( KiB/s): min=22784, max=43008, per=31.75%, avg=34905.60, stdev=4717.68, samples=20 01:37:08.588 iops : min= 178, max= 336, avg=272.70, stdev=36.86, samples=20 01:37:08.588 lat (msec) : 10=92.34%, 20=2.49%, 50=3.22%, 100=1.94% 01:37:08.588 cpu : usr=95.94%, sys=3.17%, ctx=27, majf=0, minf=0 01:37:08.588 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:37:08.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:37:08.588 issued rwts: total=2730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:37:08.588 latency : target=0, window=0, percentile=100.00%, depth=3 01:37:08.588 01:37:08.588 Run status group 0 (all jobs): 01:37:08.588 READ: bw=107MiB/s (113MB/s), 33.0MiB/s-40.4MiB/s (34.6MB/s-42.4MB/s), io=1076MiB (1128MB), run=10004-10024msec 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:08.588 01:37:08.588 real 0m10.869s 01:37:08.588 user 0m29.246s 01:37:08.588 sys 0m1.223s 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:08.588 ************************************ 01:37:08.588 END TEST fio_dif_digest 01:37:08.588 ************************************ 01:37:08.588 05:31:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:37:08.588 05:31:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:37:08.588 05:31:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:37:08.588 05:31:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:37:08.589 rmmod nvme_tcp 01:37:08.589 rmmod nvme_fabrics 01:37:08.589 rmmod nvme_keyring 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107624 ']' 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107624 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107624 ']' 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107624 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107624 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107624' 01:37:08.589 killing process with pid 107624 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107624 01:37:08.589 05:31:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107624 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:37:08.589 05:31:58 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:37:08.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:08.589 Waiting for block devices as requested 01:37:08.589 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:37:08.589 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:37:08.589 05:31:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:37:08.589 05:31:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:37:08.589 05:31:59 nvmf_dif -- nvmf/common.sh@300 -- # return 0 01:37:08.589 01:37:08.589 real 0m58.909s 01:37:08.589 user 3m55.868s 01:37:08.589 sys 0m10.338s 01:37:08.589 05:31:59 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:08.589 05:31:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:37:08.589 ************************************ 01:37:08.589 END TEST nvmf_dif 01:37:08.589 ************************************ 01:37:08.589 05:31:59 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:37:08.589 05:31:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:08.589 05:31:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:08.589 05:31:59 -- common/autotest_common.sh@10 -- # set +x 01:37:08.589 ************************************ 01:37:08.589 START TEST nvmf_abort_qd_sizes 01:37:08.589 ************************************ 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:37:08.589 * Looking for test storage... 01:37:08.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:37:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:08.589 --rc genhtml_branch_coverage=1 01:37:08.589 --rc genhtml_function_coverage=1 01:37:08.589 --rc genhtml_legend=1 01:37:08.589 --rc geninfo_all_blocks=1 01:37:08.589 --rc geninfo_unexecuted_blocks=1 01:37:08.589 01:37:08.589 ' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:37:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:08.589 --rc genhtml_branch_coverage=1 01:37:08.589 --rc genhtml_function_coverage=1 01:37:08.589 --rc genhtml_legend=1 01:37:08.589 --rc geninfo_all_blocks=1 01:37:08.589 --rc geninfo_unexecuted_blocks=1 01:37:08.589 01:37:08.589 ' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:37:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:08.589 --rc genhtml_branch_coverage=1 01:37:08.589 --rc genhtml_function_coverage=1 01:37:08.589 --rc genhtml_legend=1 01:37:08.589 --rc geninfo_all_blocks=1 01:37:08.589 --rc geninfo_unexecuted_blocks=1 01:37:08.589 01:37:08.589 ' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:37:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:08.589 --rc genhtml_branch_coverage=1 01:37:08.589 --rc genhtml_function_coverage=1 01:37:08.589 --rc genhtml_legend=1 01:37:08.589 --rc geninfo_all_blocks=1 01:37:08.589 --rc geninfo_unexecuted_blocks=1 01:37:08.589 01:37:08.589 ' 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:08.589 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:37:08.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:37:08.590 Cannot find device "nvmf_init_br" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:37:08.590 Cannot find device "nvmf_init_br2" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:37:08.590 Cannot find device "nvmf_tgt_br" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:37:08.590 Cannot find device "nvmf_tgt_br2" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:37:08.590 Cannot find device "nvmf_init_br" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:37:08.590 Cannot find device "nvmf_init_br2" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:37:08.590 Cannot find device "nvmf_tgt_br" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:37:08.590 Cannot find device "nvmf_tgt_br2" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:37:08.590 Cannot find device "nvmf_br" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:37:08.590 Cannot find device "nvmf_init_if" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:37:08.590 Cannot find device "nvmf_init_if2" 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:37:08.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
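Annotation: the teardown just traced is deliberately tolerant. Every ip command that can fail because the device or namespace is not there yet is followed by true on the same source line, which is why each "Cannot find device" message above is immediately followed by an xtrace of true under the same line number. A minimal sketch of that pattern, assuming the same interface names as in this run:

    # Tear down anything left over from a previous run; missing devices are fine.
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster || true
        ip link set "$br" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true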
01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:37:08.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:37:08.590 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:37:08.591 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:37:08.591 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 01:37:08.591 01:37:08.591 --- 10.0.0.3 ping statistics --- 01:37:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:37:08.591 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:37:08.591 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:37:08.591 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 01:37:08.591 01:37:08.591 --- 10.0.0.4 ping statistics --- 01:37:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:37:08.591 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:37:08.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:37:08.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:37:08.591 01:37:08.591 --- 10.0.0.1 ping statistics --- 01:37:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:37:08.591 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:37:08.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:37:08.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 01:37:08.591 01:37:08.591 --- 10.0.0.2 ping statistics --- 01:37:08.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:37:08.591 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:37:08.591 05:31:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:37:08.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:08.851 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:37:08.851 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:37:08.851 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109038 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109038 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109038 ']' 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:08.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:08.852 05:32:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:08.852 [2024-12-09 05:32:00.434481] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
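Annotation: the pings above verify the test network that nvmf_veth_init just built. The initiator-side veth ends stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target-side ends move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, the peer ends are enslaved to the nvmf_br bridge, and iptables rules admit the NVMe/TCP port. A condensed sketch of the same sequence; only the first interface of each pair is shown, the second pair is identical with the *2 names:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                   # root namespace reaches the target side
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the namespace reaches back

NET_TYPE=virt in the job configuration is what steers nvmftestinit to this veth-based path instead of a physical NIC, as the [[ virt == phy ]] checks above show.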
01:37:08.852 [2024-12-09 05:32:00.434534] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:37:09.112 [2024-12-09 05:32:00.573655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:37:09.112 [2024-12-09 05:32:00.611210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:37:09.112 [2024-12-09 05:32:00.611249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:37:09.112 [2024-12-09 05:32:00.611256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:37:09.112 [2024-12-09 05:32:00.611262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:37:09.112 [2024-12-09 05:32:00.611266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:37:09.112 [2024-12-09 05:32:00.611964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:09.112 [2024-12-09 05:32:00.612027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:37:09.112 [2024-12-09 05:32:00.612105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:09.112 [2024-12-09 05:32:00.612093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:37:10.055 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:37:10.056 05:32:01 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 ************************************ 01:37:10.056 START TEST spdk_target_abort 01:37:10.056 ************************************ 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 spdk_targetn1 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 [2024-12-09 05:32:01.445891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:10.056 [2024-12-09 05:32:01.483697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:10.056 05:32:01 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:10.056 05:32:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:13.345 Initializing NVMe Controllers 01:37:13.345 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:37:13.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:13.345 Initialization complete. Launching workers. 
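Annotation: the spdk_target_abort case configures the running nvmf_tgt entirely over JSON-RPC before the abort workload starts: a local NVMe controller is attached as bdev spdk_targetn1, a TCP transport is created, and a test subsystem with one namespace and one listener is exposed at 10.0.0.3:4420. rpc_cmd in the trace is the test wrapper around scripts/rpc.py; the sketch below spells that out against the default /var/tmp/spdk.sock socket, so the explicit rpc.py invocation is illustrative rather than the test's literal command line:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # yields bdev spdk_targetn1
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

    # rabort then sweeps queue depths 4, 24 and 64 with the abort example,
    # perf-style flags: -q queue depth, -w workload, -o I/O size, -M read mix.
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

Each run below prints how many I/Os completed, how many abort commands were submitted versus failed to submit, and how many of the submitted aborts succeeded.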
01:37:13.345 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16375, failed: 0 01:37:13.345 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1054, failed to submit 15321 01:37:13.345 success 716, unsuccessful 338, failed 0 01:37:13.345 05:32:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:13.345 05:32:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:16.645 Initializing NVMe Controllers 01:37:16.645 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:37:16.645 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:16.645 Initialization complete. Launching workers. 01:37:16.645 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5975, failed: 0 01:37:16.645 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 4735 01:37:16.645 success 282, unsuccessful 958, failed 0 01:37:16.645 05:32:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:16.645 05:32:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:19.932 Initializing NVMe Controllers 01:37:19.932 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:37:19.932 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:19.932 Initialization complete. Launching workers. 
01:37:19.932 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37135, failed: 0 01:37:19.932 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2600, failed to submit 34535 01:37:19.932 success 557, unsuccessful 2043, failed 0 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:19.932 05:32:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109038 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109038 ']' 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109038 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109038 01:37:21.848 killing process with pid 109038 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109038' 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109038 01:37:21.848 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109038 01:37:22.110 ************************************ 01:37:22.110 END TEST spdk_target_abort 01:37:22.110 ************************************ 01:37:22.110 01:37:22.110 real 0m12.132s 01:37:22.110 user 0m49.389s 01:37:22.110 sys 0m1.343s 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:22.110 05:32:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:37:22.110 05:32:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:22.110 05:32:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:22.110 05:32:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:22.110 ************************************ 01:37:22.110 START TEST kernel_target_abort 01:37:22.110 
************************************ 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:37:22.110 05:32:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:37:22.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:22.372 Waiting for block devices as requested 01:37:22.372 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:37:22.372 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:37:22.633 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:37:22.633 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:37:22.633 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:37:22.633 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:37:22.634 No valid GPT data, bailing 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:37:22.634 No valid GPT data, bailing 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:37:22.634 No valid GPT data, bailing 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:37:22.634 No valid GPT data, bailing 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:37:22.634 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 --hostid=717d4fb6-9228-45ae-9435-1e6ea8cff784 -a 10.0.0.1 -t tcp -s 4420 01:37:22.894 01:37:22.894 Discovery Log Number of Records 2, Generation counter 2 01:37:22.894 =====Discovery Log Entry 0====== 01:37:22.894 trtype: tcp 01:37:22.894 adrfam: ipv4 01:37:22.894 subtype: current discovery subsystem 01:37:22.894 treq: not specified, sq flow control disable supported 01:37:22.894 portid: 1 01:37:22.894 trsvcid: 4420 01:37:22.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:37:22.894 traddr: 10.0.0.1 01:37:22.894 eflags: none 01:37:22.894 sectype: none 01:37:22.894 =====Discovery Log Entry 1====== 01:37:22.894 trtype: tcp 01:37:22.894 adrfam: ipv4 01:37:22.894 subtype: nvme subsystem 01:37:22.894 treq: not specified, sq flow control disable supported 01:37:22.894 portid: 1 01:37:22.894 trsvcid: 4420 01:37:22.894 subnqn: nqn.2016-06.io.spdk:testnqn 01:37:22.894 traddr: 10.0.0.1 01:37:22.894 eflags: none 01:37:22.894 sectype: none 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:37:22.894 05:32:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:22.894 05:32:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:26.195 Initializing NVMe Controllers 01:37:26.195 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:37:26.195 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:26.195 Initialization complete. Launching workers. 01:37:26.195 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56256, failed: 0 01:37:26.195 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56256, failed to submit 0 01:37:26.195 success 0, unsuccessful 56256, failed 0 01:37:26.195 05:32:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:26.195 05:32:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:29.558 Initializing NVMe Controllers 01:37:29.558 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:37:29.558 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:29.558 Initialization complete. Launching workers. 
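Annotation: kernel_target_abort repeats the same queue-depth sweep, but against a Linux kernel NVMe-oF target instead of nvmf_tgt. configure_kernel_target picks an NVMe block device with no partition table (the spdk-gpt.py and blkid probes that reported "No valid GPT data, bailing"), builds the target through the nvmet configfs tree, and listens on 10.0.0.1:4420. The xtrace only records the values being echoed, not the files they land in, so the configfs paths in this sketch are the standard nvmet attributes those writes presumably target, with /dev/nvme1n1 as the backing device selected in this run:

    NQN=nqn.2016-06.io.spdk:testnqn
    SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet                          # nvmet-tcp must also be available for the tcp transport
    mkdir "$SUBSYS"
    mkdir "$SUBSYS/namespaces/1"
    mkdir "$PORT"
    echo "SPDK-$NQN"  > "$SUBSYS/attr_model"            # model string (destination file assumed)
    echo 1            > "$SUBSYS/attr_allow_any_host"
    echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUBSYS" "$PORT/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420             # lists the discovery subsystem and $NQN, as above

Teardown in clean_kernel_target is the reverse: remove the port's subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet, matching the rm/rmdir sequence traced further down.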
01:37:29.558 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87005, failed: 0 01:37:29.558 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35569, failed to submit 51436 01:37:29.558 success 0, unsuccessful 35569, failed 0 01:37:29.558 05:32:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:37:29.558 05:32:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:37:32.839 Initializing NVMe Controllers 01:37:32.839 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:37:32.839 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:37:32.839 Initialization complete. Launching workers. 01:37:32.839 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95843, failed: 0 01:37:32.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23980, failed to submit 71863 01:37:32.839 success 0, unsuccessful 23980, failed 0 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:37:32.839 05:32:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:37:33.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:38.374 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:37:38.634 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:37:38.634 01:37:38.634 real 0m16.526s 01:37:38.635 user 0m7.114s 01:37:38.635 sys 0m6.931s 01:37:38.635 05:32:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:38.635 ************************************ 01:37:38.635 END TEST kernel_target_abort 01:37:38.635 ************************************ 01:37:38.635 05:32:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:37:38.635 
05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:37:38.635 rmmod nvme_tcp 01:37:38.635 rmmod nvme_fabrics 01:37:38.635 rmmod nvme_keyring 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:37:38.635 Process with pid 109038 is not found 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109038 ']' 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109038 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109038 ']' 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109038 01:37:38.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109038) - No such process 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109038 is not found' 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:37:38.635 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:37:38.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:39.158 Waiting for block devices as requested 01:37:39.158 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:37:39.158 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:37:39.158 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:37:39.420 05:32:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 01:37:39.420 01:37:39.420 real 0m31.660s 01:37:39.420 user 0m57.508s 01:37:39.420 sys 0m9.382s 01:37:39.420 ************************************ 01:37:39.420 END TEST nvmf_abort_qd_sizes 01:37:39.420 ************************************ 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:39.420 05:32:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:37:39.420 05:32:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:37:39.420 05:32:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:37:39.420 05:32:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:39.420 05:32:30 -- common/autotest_common.sh@10 -- # set +x 01:37:39.420 ************************************ 01:37:39.420 START TEST keyring_file 01:37:39.420 ************************************ 01:37:39.420 05:32:31 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:37:39.683 * Looking for test storage... 
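Before the keyring_file suite gets going, the trace above shows nvmftestfini unwinding the TCP fixture from the previous test: the initiator-side kernel modules are unloaded, the SPDK-specific iptables rules are dropped, and the veth/bridge topology plus the links inside the target network namespace are deleted. A condensed sketch of that teardown, reconstructed from the trace; the interface and namespace names are the fixtures the harness created earlier, and the iptables pipeline is an assumption about how the iptr helper chains the three commands shown.

# Teardown sketch matching the nvmf_tcp_fini / nvmf_veth_fini steps traced above.
modprobe -v -r nvme-tcp                      # unload initiator-side modules
modprobe -v -r nvme-fabrics

# drop only the SPDK_NVMF iptables rules (assumed pipeline for the iptr helper)
iptables-save | grep -v SPDK_NVMF | iptables-restore

# detach every veth end from the bridge, bring them down, then delete the
# bridge, the host-side interfaces and the links inside the target namespace
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
done
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2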
01:37:39.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@345 -- # : 1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@353 -- # local d=1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@355 -- # echo 1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@353 -- # local d=2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@355 -- # echo 2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:37:39.683 05:32:31 keyring_file -- scripts/common.sh@368 -- # return 0 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:37:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:39.683 --rc genhtml_branch_coverage=1 01:37:39.683 --rc genhtml_function_coverage=1 01:37:39.683 --rc genhtml_legend=1 01:37:39.683 --rc geninfo_all_blocks=1 01:37:39.683 --rc geninfo_unexecuted_blocks=1 01:37:39.683 01:37:39.683 ' 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:37:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:39.683 --rc genhtml_branch_coverage=1 01:37:39.683 --rc genhtml_function_coverage=1 01:37:39.683 --rc genhtml_legend=1 01:37:39.683 --rc geninfo_all_blocks=1 01:37:39.683 --rc 
geninfo_unexecuted_blocks=1 01:37:39.683 01:37:39.683 ' 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:37:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:39.683 --rc genhtml_branch_coverage=1 01:37:39.683 --rc genhtml_function_coverage=1 01:37:39.683 --rc genhtml_legend=1 01:37:39.683 --rc geninfo_all_blocks=1 01:37:39.683 --rc geninfo_unexecuted_blocks=1 01:37:39.683 01:37:39.683 ' 01:37:39.683 05:32:31 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:37:39.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:39.683 --rc genhtml_branch_coverage=1 01:37:39.683 --rc genhtml_function_coverage=1 01:37:39.683 --rc genhtml_legend=1 01:37:39.683 --rc geninfo_all_blocks=1 01:37:39.683 --rc geninfo_unexecuted_blocks=1 01:37:39.683 01:37:39.683 ' 01:37:39.683 05:32:31 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:37:39.683 05:32:31 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:37:39.683 05:32:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:37:39.684 05:32:31 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:37:39.684 05:32:31 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:37:39.684 05:32:31 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:37:39.684 05:32:31 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:37:39.684 05:32:31 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:39.684 05:32:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:39.684 05:32:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:39.684 05:32:31 keyring_file -- paths/export.sh@5 -- # export PATH 01:37:39.684 05:32:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@51 -- # : 0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:37:39.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:37:39.684 05:32:31 
keyring_file -- keyring/common.sh@17 -- # name=key0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@17 -- # digest=0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@18 -- # mktemp 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wdFhUicW85 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@733 -- # python - 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wdFhUicW85 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wdFhUicW85 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wdFhUicW85 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@17 -- # name=key1 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@17 -- # digest=0 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@18 -- # mktemp 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pwGg6pA3Nq 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:37:39.684 05:32:31 keyring_file -- nvmf/common.sh@733 -- # python - 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pwGg6pA3Nq 01:37:39.684 05:32:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pwGg6pA3Nq 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pwGg6pA3Nq 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=109979 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109979 01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109979 ']' 01:37:39.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
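The two prep_key calls above reduce to a small pattern: create a temp file, write a TLS PSK in the NVMe interchange format into it, and restrict it to mode 0600 so keyring_file_add_key will accept it later (a looser mode is rejected, as a later step in this run demonstrates). A minimal sketch, assuming PSK already holds a string produced the way format_interchange_psk does in the trace (prefix NVMeTLSkey-1, digest 0); the key literal here is only a placeholder.

# prep_key sketch; PSK is a placeholder for the interchange-format key string
# that nvmf/common.sh's format_interchange_psk generates from the raw hex key.
PSK="NVMeTLSkey-1:00:placeholder"
key0path=$(mktemp)                  # e.g. /tmp/tmp.wdFhUicW85 in this run
printf '%s\n' "$PSK" > "$key0path"
chmod 0600 "$key0path"              # required: keyring_file_add_key refuses wider permissions
# the path is registered later in the run, against the bdevperf RPC socket:
#   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"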
01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:39.684 05:32:31 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:39.684 05:32:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:37:39.945 [2024-12-09 05:32:31.335297] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:39.945 [2024-12-09 05:32:31.335595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109979 ] 01:37:39.945 [2024-12-09 05:32:31.483100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:39.945 [2024-12-09 05:32:31.521702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:40.205 05:32:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:40.205 05:32:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:37:40.205 05:32:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:37:40.205 05:32:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:40.205 05:32:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:37:40.206 [2024-12-09 05:32:31.714229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:37:40.206 null0 01:37:40.206 [2024-12-09 05:32:31.746212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:37:40.206 [2024-12-09 05:32:31.746499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:40.206 05:32:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:37:40.206 [2024-12-09 05:32:31.774210] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:37:40.206 2024/12/09 05:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 
trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 01:37:40.206 request: 01:37:40.206 { 01:37:40.206 "method": "nvmf_subsystem_add_listener", 01:37:40.206 "params": { 01:37:40.206 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:37:40.206 "secure_channel": false, 01:37:40.206 "listen_address": { 01:37:40.206 "trtype": "tcp", 01:37:40.206 "traddr": "127.0.0.1", 01:37:40.206 "trsvcid": "4420" 01:37:40.206 } 01:37:40.206 } 01:37:40.206 } 01:37:40.206 Got JSON-RPC error response 01:37:40.206 GoRPCClient: error on JSON-RPC call 01:37:40.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:37:40.206 05:32:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=109996 01:37:40.206 05:32:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109996 /var/tmp/bperf.sock 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109996 ']' 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:40.206 05:32:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:37:40.206 05:32:31 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:37:40.206 [2024-12-09 05:32:31.817122] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
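The JSON-RPC error above is deliberate: the target already listens on 127.0.0.1:4420 from the earlier configuration step, and file.sh re-adds the same listener expecting the call to be rejected with Code=-32602 ("Listener already exists"). Stripped of the test wrappers, the failing call is just the rpc.py invocation below, against the target's default socket.

# Re-adding a listener that already exists; the target answers with
# Code=-32602 (Invalid parameters), which is the result the test asserts.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
    nqn.2016-06.io.spdk:cnode0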
01:37:40.206 [2024-12-09 05:32:31.817378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109996 ] 01:37:40.465 [2024-12-09 05:32:31.957678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:40.465 [2024-12-09 05:32:31.994737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:41.438 05:32:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:41.438 05:32:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:37:41.438 05:32:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:41.438 05:32:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:41.438 05:32:32 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pwGg6pA3Nq 01:37:41.438 05:32:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pwGg6pA3Nq 01:37:41.718 05:32:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:37:41.718 05:32:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:37:41.718 05:32:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:41.718 05:32:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:41.718 05:32:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:41.718 05:32:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wdFhUicW85 == \/\t\m\p\/\t\m\p\.\w\d\F\h\U\i\c\W\8\5 ]] 01:37:41.718 05:32:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:37:41.978 05:32:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:41.978 05:32:33 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.pwGg6pA3Nq == \/\t\m\p\/\t\m\p\.\p\w\G\g\6\p\A\3\N\q ]] 01:37:41.978 05:32:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:41.978 05:32:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:42.238 05:32:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:37:42.238 05:32:33 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:37:42.238 05:32:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:37:42.238 05:32:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:42.238 05:32:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:42.239 05:32:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 01:37:42.239 05:32:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:42.499 05:32:33 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:37:42.499 05:32:33 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:42.499 05:32:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:42.760 [2024-12-09 05:32:34.160794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:37:42.760 nvme0n1 01:37:42.760 05:32:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:37:42.760 05:32:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:42.760 05:32:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:42.760 05:32:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:42.760 05:32:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:42.760 05:32:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:43.020 05:32:34 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:37:43.020 05:32:34 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:37:43.020 05:32:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:37:43.020 05:32:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:43.020 05:32:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:43.020 05:32:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:37:43.020 05:32:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:43.279 05:32:34 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:37:43.279 05:32:34 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:37:43.279 Running I/O for 1 seconds... 
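Everything between the bdevperf launch and the one-second run whose results follow condenses to the sequence sketched below: register both PSK files with bdevperf's keyring, sanity-check them via keyring_get_keys, attach the target through key0, and trigger the workload. The refcount check is the same jq pipeline the get_refcnt helper uses in the trace; paths and NQNs are the ones this run created.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# bdevperf was started idle with -z; it waits on /var/tmp/bperf.sock until told to run
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z &

# register both PSK files and confirm key0's path and refcount (1 while unused)
$RPC keyring_file_add_key key0 /tmp/tmp.wdFhUicW85
$RPC keyring_file_add_key key1 /tmp/tmp.pwGg6pA3Nq
$RPC keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt   # -> 1

# attach the target through key0; the bdev shows up as nvme0n1 and key0's
# refcount goes to 2 while the controller holds a reference to it
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# kick off the 1-second randrw run; the results are reported below
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests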
01:37:44.215 16612.00 IOPS, 64.89 MiB/s 01:37:44.215 Latency(us) 01:37:44.215 [2024-12-09T05:32:35.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:44.215 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:37:44.215 nvme0n1 : 1.01 16647.01 65.03 0.00 0.00 7667.48 3705.30 12855.14 01:37:44.215 [2024-12-09T05:32:35.832Z] =================================================================================================================== 01:37:44.215 [2024-12-09T05:32:35.832Z] Total : 16647.01 65.03 0.00 0.00 7667.48 3705.30 12855.14 01:37:44.215 { 01:37:44.215 "results": [ 01:37:44.215 { 01:37:44.215 "job": "nvme0n1", 01:37:44.215 "core_mask": "0x2", 01:37:44.215 "workload": "randrw", 01:37:44.215 "percentage": 50, 01:37:44.215 "status": "finished", 01:37:44.215 "queue_depth": 128, 01:37:44.215 "io_size": 4096, 01:37:44.215 "runtime": 1.005706, 01:37:44.215 "iops": 16647.012148679634, 01:37:44.215 "mibps": 65.02739120577982, 01:37:44.215 "io_failed": 0, 01:37:44.215 "io_timeout": 0, 01:37:44.215 "avg_latency_us": 7667.478035341794, 01:37:44.215 "min_latency_us": 3705.3046153846153, 01:37:44.215 "max_latency_us": 12855.138461538461 01:37:44.215 } 01:37:44.215 ], 01:37:44.215 "core_count": 1 01:37:44.215 } 01:37:44.215 05:32:35 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:37:44.215 05:32:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:37:44.475 05:32:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 01:37:44.475 05:32:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:44.475 05:32:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:44.475 05:32:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:44.475 05:32:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:44.475 05:32:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:44.734 05:32:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:37:44.735 05:32:36 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 01:37:44.735 05:32:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:44.735 05:32:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:37:44.735 05:32:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:37:44.735 05:32:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:44.735 05:32:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:44.994 05:32:36 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 01:37:44.994 05:32:36 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:37:44.994 05:32:36 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:44.994 05:32:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:37:44.994 05:32:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:37:45.255 [2024-12-09 05:32:36.609984] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:37:45.255 [2024-12-09 05:32:36.610014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c530 (107): Transport endpoint is not connected 01:37:45.255 [2024-12-09 05:32:36.611005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c530 (9): Bad file descriptor 01:37:45.255 [2024-12-09 05:32:36.612004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:37:45.255 [2024-12-09 05:32:36.612018] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:37:45.255 [2024-12-09 05:32:36.612023] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:37:45.255 [2024-12-09 05:32:36.612029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
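The connection errors above are the point of this step: file.sh@70 attaches with --psk key1 instead of key0 and asserts, via the NOT wrapper, that the call fails (the JSON-RPC error it gets back is printed next). NOT is the harness's way of inverting a command's exit status; the sketch below is a simplified stand-in for the real helper in autotest_common.sh, shown only to make the pattern explicit.

# Simplified stand-in for the NOT helper: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded -> test failure
    fi
    return 0        # the command failed, which is what the test wanted
}

# Attaching with the wrong PSK is expected to fail during connection setup:
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1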
01:37:45.255 2024/12/09 05:32:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:37:45.255 request: 01:37:45.255 { 01:37:45.255 "method": "bdev_nvme_attach_controller", 01:37:45.255 "params": { 01:37:45.255 "name": "nvme0", 01:37:45.255 "trtype": "tcp", 01:37:45.255 "traddr": "127.0.0.1", 01:37:45.255 "adrfam": "ipv4", 01:37:45.255 "trsvcid": "4420", 01:37:45.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:37:45.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:37:45.255 "prchk_reftag": false, 01:37:45.255 "prchk_guard": false, 01:37:45.255 "hdgst": false, 01:37:45.255 "ddgst": false, 01:37:45.255 "psk": "key1", 01:37:45.255 "allow_unrecognized_csi": false 01:37:45.255 } 01:37:45.255 } 01:37:45.255 Got JSON-RPC error response 01:37:45.255 GoRPCClient: error on JSON-RPC call 01:37:45.255 05:32:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:37:45.255 05:32:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:37:45.255 05:32:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:37:45.255 05:32:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:37:45.255 05:32:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:45.255 05:32:36 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:37:45.255 05:32:36 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:37:45.255 05:32:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:45.514 05:32:37 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:37:45.514 05:32:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:37:45.514 05:32:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:37:45.774 05:32:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:37:45.774 05:32:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:37:46.034 05:32:37 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:37:46.034 05:32:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 01:37:46.034 05:32:37 keyring_file -- keyring/file.sh@78 -- # jq length 01:37:46.294 05:32:37 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:37:46.294 05:32:37 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.294 [2024-12-09 05:32:37.870572] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wdFhUicW85': 0100660 01:37:46.294 [2024-12-09 05:32:37.870591] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:37:46.294 2024/12/09 05:32:37 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.wdFhUicW85], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:37:46.294 request: 01:37:46.294 { 01:37:46.294 "method": "keyring_file_add_key", 01:37:46.294 "params": { 01:37:46.294 "name": "key0", 01:37:46.294 "path": "/tmp/tmp.wdFhUicW85" 01:37:46.294 } 01:37:46.294 } 01:37:46.294 Got JSON-RPC error response 01:37:46.294 GoRPCClient: error on JSON-RPC call 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:37:46.294 05:32:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:37:46.294 05:32:37 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.294 05:32:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wdFhUicW85 01:37:46.553 05:32:38 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wdFhUicW85 01:37:46.553 05:32:38 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:37:46.553 05:32:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:46.553 05:32:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:46.553 05:32:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:46.554 05:32:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:46.554 05:32:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:46.814 05:32:38 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 01:37:46.814 05:32:38 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:46.814 05:32:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:46.814 05:32:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:47.074 [2024-12-09 05:32:38.514692] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wdFhUicW85': No such file or directory 01:37:47.074 [2024-12-09 05:32:38.514714] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:37:47.074 [2024-12-09 05:32:38.514726] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:37:47.074 [2024-12-09 05:32:38.514731] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:37:47.074 [2024-12-09 05:32:38.514736] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:37:47.074 [2024-12-09 05:32:38.514740] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:37:47.074 2024/12/09 05:32:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 01:37:47.074 request: 01:37:47.074 { 01:37:47.074 "method": "bdev_nvme_attach_controller", 01:37:47.074 "params": { 01:37:47.074 "name": "nvme0", 01:37:47.074 "trtype": "tcp", 01:37:47.074 "traddr": "127.0.0.1", 01:37:47.074 "adrfam": "ipv4", 01:37:47.074 "trsvcid": "4420", 01:37:47.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:37:47.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:37:47.074 "prchk_reftag": false, 01:37:47.074 "prchk_guard": false, 01:37:47.074 "hdgst": false, 01:37:47.074 "ddgst": false, 01:37:47.074 "psk": "key0", 01:37:47.074 "allow_unrecognized_csi": false 01:37:47.074 } 01:37:47.074 } 01:37:47.074 Got JSON-RPC error response 01:37:47.074 
GoRPCClient: error on JSON-RPC call 01:37:47.074 05:32:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:37:47.074 05:32:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:37:47.074 05:32:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:37:47.074 05:32:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:37:47.074 05:32:38 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:37:47.074 05:32:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:37:47.334 05:32:38 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@17 -- # name=key0 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@17 -- # digest=0 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@18 -- # mktemp 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WXWlXt7Alq 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:37:47.334 05:32:38 keyring_file -- nvmf/common.sh@733 -- # python - 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WXWlXt7Alq 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WXWlXt7Alq 01:37:47.334 05:32:38 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.WXWlXt7Alq 01:37:47.334 05:32:38 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WXWlXt7Alq 01:37:47.334 05:32:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WXWlXt7Alq 01:37:47.594 05:32:38 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:47.594 05:32:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:47.855 nvme0n1 01:37:47.855 05:32:39 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:37:47.855 05:32:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:47.855 05:32:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:47.855 05:32:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:47.855 05:32:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:47.855 05:32:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
01:37:48.116 05:32:39 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:37:48.116 05:32:39 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:37:48.116 05:32:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:37:48.116 05:32:39 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:37:48.116 05:32:39 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:37:48.116 05:32:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:48.116 05:32:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:48.116 05:32:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:48.378 05:32:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:37:48.378 05:32:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:37:48.378 05:32:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:48.378 05:32:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:48.378 05:32:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:48.378 05:32:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:48.378 05:32:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:48.674 05:32:40 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:37:48.674 05:32:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:37:48.674 05:32:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:37:48.952 05:32:40 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:37:48.952 05:32:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:48.952 05:32:40 keyring_file -- keyring/file.sh@105 -- # jq length 01:37:48.952 05:32:40 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:37:48.952 05:32:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WXWlXt7Alq 01:37:48.952 05:32:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WXWlXt7Alq 01:37:49.212 05:32:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pwGg6pA3Nq 01:37:49.212 05:32:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pwGg6pA3Nq 01:37:49.473 05:32:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:49.473 05:32:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:37:49.733 nvme0n1 01:37:49.733 05:32:41 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:37:49.733 05:32:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
01:37:49.994 05:32:41 keyring_file -- keyring/file.sh@113 -- # config='{ 01:37:49.994 "subsystems": [ 01:37:49.994 { 01:37:49.994 "subsystem": "keyring", 01:37:49.994 "config": [ 01:37:49.994 { 01:37:49.994 "method": "keyring_file_add_key", 01:37:49.994 "params": { 01:37:49.994 "name": "key0", 01:37:49.994 "path": "/tmp/tmp.WXWlXt7Alq" 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "keyring_file_add_key", 01:37:49.994 "params": { 01:37:49.994 "name": "key1", 01:37:49.994 "path": "/tmp/tmp.pwGg6pA3Nq" 01:37:49.994 } 01:37:49.994 } 01:37:49.994 ] 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "subsystem": "iobuf", 01:37:49.994 "config": [ 01:37:49.994 { 01:37:49.994 "method": "iobuf_set_options", 01:37:49.994 "params": { 01:37:49.994 "enable_numa": false, 01:37:49.994 "large_bufsize": 135168, 01:37:49.994 "large_pool_count": 1024, 01:37:49.994 "small_bufsize": 8192, 01:37:49.994 "small_pool_count": 8192 01:37:49.994 } 01:37:49.994 } 01:37:49.994 ] 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "subsystem": "sock", 01:37:49.994 "config": [ 01:37:49.994 { 01:37:49.994 "method": "sock_set_default_impl", 01:37:49.994 "params": { 01:37:49.994 "impl_name": "posix" 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "sock_impl_set_options", 01:37:49.994 "params": { 01:37:49.994 "enable_ktls": false, 01:37:49.994 "enable_placement_id": 0, 01:37:49.994 "enable_quickack": false, 01:37:49.994 "enable_recv_pipe": true, 01:37:49.994 "enable_zerocopy_send_client": false, 01:37:49.994 "enable_zerocopy_send_server": true, 01:37:49.994 "impl_name": "ssl", 01:37:49.994 "recv_buf_size": 4096, 01:37:49.994 "send_buf_size": 4096, 01:37:49.994 "tls_version": 0, 01:37:49.994 "zerocopy_threshold": 0 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "sock_impl_set_options", 01:37:49.994 "params": { 01:37:49.994 "enable_ktls": false, 01:37:49.994 "enable_placement_id": 0, 01:37:49.994 "enable_quickack": false, 01:37:49.994 "enable_recv_pipe": true, 01:37:49.994 "enable_zerocopy_send_client": false, 01:37:49.994 "enable_zerocopy_send_server": true, 01:37:49.994 "impl_name": "posix", 01:37:49.994 "recv_buf_size": 2097152, 01:37:49.994 "send_buf_size": 2097152, 01:37:49.994 "tls_version": 0, 01:37:49.994 "zerocopy_threshold": 0 01:37:49.994 } 01:37:49.994 } 01:37:49.994 ] 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "subsystem": "vmd", 01:37:49.994 "config": [] 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "subsystem": "accel", 01:37:49.994 "config": [ 01:37:49.994 { 01:37:49.994 "method": "accel_set_options", 01:37:49.994 "params": { 01:37:49.994 "buf_count": 2048, 01:37:49.994 "large_cache_size": 16, 01:37:49.994 "sequence_count": 2048, 01:37:49.994 "small_cache_size": 128, 01:37:49.994 "task_count": 2048 01:37:49.994 } 01:37:49.994 } 01:37:49.994 ] 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "subsystem": "bdev", 01:37:49.994 "config": [ 01:37:49.994 { 01:37:49.994 "method": "bdev_set_options", 01:37:49.994 "params": { 01:37:49.994 "bdev_auto_examine": true, 01:37:49.994 "bdev_io_cache_size": 256, 01:37:49.994 "bdev_io_pool_size": 65535, 01:37:49.994 "iobuf_large_cache_size": 16, 01:37:49.994 "iobuf_small_cache_size": 128 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "bdev_raid_set_options", 01:37:49.994 "params": { 01:37:49.994 "process_max_bandwidth_mb_sec": 0, 01:37:49.994 "process_window_size_kb": 1024 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "bdev_iscsi_set_options", 01:37:49.994 "params": { 01:37:49.994 
"timeout_sec": 30 01:37:49.994 } 01:37:49.994 }, 01:37:49.994 { 01:37:49.994 "method": "bdev_nvme_set_options", 01:37:49.994 "params": { 01:37:49.994 "action_on_timeout": "none", 01:37:49.994 "allow_accel_sequence": false, 01:37:49.994 "arbitration_burst": 0, 01:37:49.994 "bdev_retry_count": 3, 01:37:49.994 "ctrlr_loss_timeout_sec": 0, 01:37:49.994 "delay_cmd_submit": true, 01:37:49.994 "dhchap_dhgroups": [ 01:37:49.994 "null", 01:37:49.994 "ffdhe2048", 01:37:49.994 "ffdhe3072", 01:37:49.994 "ffdhe4096", 01:37:49.994 "ffdhe6144", 01:37:49.994 "ffdhe8192" 01:37:49.994 ], 01:37:49.994 "dhchap_digests": [ 01:37:49.994 "sha256", 01:37:49.994 "sha384", 01:37:49.994 "sha512" 01:37:49.994 ], 01:37:49.994 "disable_auto_failback": false, 01:37:49.995 "fast_io_fail_timeout_sec": 0, 01:37:49.995 "generate_uuids": false, 01:37:49.995 "high_priority_weight": 0, 01:37:49.995 "io_path_stat": false, 01:37:49.995 "io_queue_requests": 512, 01:37:49.995 "keep_alive_timeout_ms": 10000, 01:37:49.995 "low_priority_weight": 0, 01:37:49.995 "medium_priority_weight": 0, 01:37:49.995 "nvme_adminq_poll_period_us": 10000, 01:37:49.995 "nvme_error_stat": false, 01:37:49.995 "nvme_ioq_poll_period_us": 0, 01:37:49.995 "rdma_cm_event_timeout_ms": 0, 01:37:49.995 "rdma_max_cq_size": 0, 01:37:49.995 "rdma_srq_size": 0, 01:37:49.995 "reconnect_delay_sec": 0, 01:37:49.995 "timeout_admin_us": 0, 01:37:49.995 "timeout_us": 0, 01:37:49.995 "transport_ack_timeout": 0, 01:37:49.995 "transport_retry_count": 4, 01:37:49.995 "transport_tos": 0 01:37:49.995 } 01:37:49.995 }, 01:37:49.995 { 01:37:49.995 "method": "bdev_nvme_attach_controller", 01:37:49.995 "params": { 01:37:49.995 "adrfam": "IPv4", 01:37:49.995 "ctrlr_loss_timeout_sec": 0, 01:37:49.995 "ddgst": false, 01:37:49.995 "fast_io_fail_timeout_sec": 0, 01:37:49.995 "hdgst": false, 01:37:49.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:37:49.995 "multipath": "multipath", 01:37:49.995 "name": "nvme0", 01:37:49.995 "prchk_guard": false, 01:37:49.995 "prchk_reftag": false, 01:37:49.995 "psk": "key0", 01:37:49.995 "reconnect_delay_sec": 0, 01:37:49.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:37:49.995 "traddr": "127.0.0.1", 01:37:49.995 "trsvcid": "4420", 01:37:49.995 "trtype": "TCP" 01:37:49.995 } 01:37:49.995 }, 01:37:49.995 { 01:37:49.995 "method": "bdev_nvme_set_hotplug", 01:37:49.995 "params": { 01:37:49.995 "enable": false, 01:37:49.995 "period_us": 100000 01:37:49.995 } 01:37:49.995 }, 01:37:49.995 { 01:37:49.995 "method": "bdev_wait_for_examine" 01:37:49.995 } 01:37:49.995 ] 01:37:49.995 }, 01:37:49.995 { 01:37:49.995 "subsystem": "nbd", 01:37:49.995 "config": [] 01:37:49.995 } 01:37:49.995 ] 01:37:49.995 }' 01:37:49.995 05:32:41 keyring_file -- keyring/file.sh@115 -- # killprocess 109996 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109996 ']' 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109996 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@959 -- # uname 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109996 01:37:49.995 killing process with pid 109996 01:37:49.995 Received shutdown signal, test time was about 1.000000 seconds 01:37:49.995 01:37:49.995 Latency(us) 01:37:49.995 [2024-12-09T05:32:41.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:49.995 [2024-12-09T05:32:41.612Z] 
=================================================================================================================== 01:37:49.995 [2024-12-09T05:32:41.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109996' 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@973 -- # kill 109996 01:37:49.995 05:32:41 keyring_file -- common/autotest_common.sh@978 -- # wait 109996 01:37:50.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:37:50.257 05:32:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=110453 01:37:50.257 05:32:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110453 /var/tmp/bperf.sock 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110453 ']' 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:50.257 05:32:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:37:50.257 05:32:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:37:50.257 05:32:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:37:50.257 "subsystems": [ 01:37:50.257 { 01:37:50.257 "subsystem": "keyring", 01:37:50.257 "config": [ 01:37:50.257 { 01:37:50.257 "method": "keyring_file_add_key", 01:37:50.257 "params": { 01:37:50.257 "name": "key0", 01:37:50.257 "path": "/tmp/tmp.WXWlXt7Alq" 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "keyring_file_add_key", 01:37:50.257 "params": { 01:37:50.257 "name": "key1", 01:37:50.257 "path": "/tmp/tmp.pwGg6pA3Nq" 01:37:50.257 } 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "iobuf", 01:37:50.257 "config": [ 01:37:50.257 { 01:37:50.257 "method": "iobuf_set_options", 01:37:50.257 "params": { 01:37:50.257 "enable_numa": false, 01:37:50.257 "large_bufsize": 135168, 01:37:50.257 "large_pool_count": 1024, 01:37:50.257 "small_bufsize": 8192, 01:37:50.257 "small_pool_count": 8192 01:37:50.257 } 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "sock", 01:37:50.257 "config": [ 01:37:50.257 { 01:37:50.257 "method": "sock_set_default_impl", 01:37:50.257 "params": { 01:37:50.257 "impl_name": "posix" 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "sock_impl_set_options", 01:37:50.257 "params": { 01:37:50.257 "enable_ktls": false, 01:37:50.257 "enable_placement_id": 0, 01:37:50.257 "enable_quickack": false, 01:37:50.257 "enable_recv_pipe": true, 01:37:50.257 "enable_zerocopy_send_client": false, 01:37:50.257 "enable_zerocopy_send_server": true, 01:37:50.257 "impl_name": "ssl", 01:37:50.257 "recv_buf_size": 4096, 01:37:50.257 "send_buf_size": 4096, 01:37:50.257 "tls_version": 0, 01:37:50.257 "zerocopy_threshold": 0 01:37:50.257 } 01:37:50.257 }, 
01:37:50.257 { 01:37:50.257 "method": "sock_impl_set_options", 01:37:50.257 "params": { 01:37:50.257 "enable_ktls": false, 01:37:50.257 "enable_placement_id": 0, 01:37:50.257 "enable_quickack": false, 01:37:50.257 "enable_recv_pipe": true, 01:37:50.257 "enable_zerocopy_send_client": false, 01:37:50.257 "enable_zerocopy_send_server": true, 01:37:50.257 "impl_name": "posix", 01:37:50.257 "recv_buf_size": 2097152, 01:37:50.257 "send_buf_size": 2097152, 01:37:50.257 "tls_version": 0, 01:37:50.257 "zerocopy_threshold": 0 01:37:50.257 } 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "vmd", 01:37:50.257 "config": [] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "accel", 01:37:50.257 "config": [ 01:37:50.257 { 01:37:50.257 "method": "accel_set_options", 01:37:50.257 "params": { 01:37:50.257 "buf_count": 2048, 01:37:50.257 "large_cache_size": 16, 01:37:50.257 "sequence_count": 2048, 01:37:50.257 "small_cache_size": 128, 01:37:50.257 "task_count": 2048 01:37:50.257 } 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "bdev", 01:37:50.257 "config": [ 01:37:50.257 { 01:37:50.257 "method": "bdev_set_options", 01:37:50.257 "params": { 01:37:50.257 "bdev_auto_examine": true, 01:37:50.257 "bdev_io_cache_size": 256, 01:37:50.257 "bdev_io_pool_size": 65535, 01:37:50.257 "iobuf_large_cache_size": 16, 01:37:50.257 "iobuf_small_cache_size": 128 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_raid_set_options", 01:37:50.257 "params": { 01:37:50.257 "process_max_bandwidth_mb_sec": 0, 01:37:50.257 "process_window_size_kb": 1024 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_iscsi_set_options", 01:37:50.257 "params": { 01:37:50.257 "timeout_sec": 30 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_nvme_set_options", 01:37:50.257 "params": { 01:37:50.257 "action_on_timeout": "none", 01:37:50.257 "allow_accel_sequence": false, 01:37:50.257 "arbitration_burst": 0, 01:37:50.257 "bdev_retry_count": 3, 01:37:50.257 "ctrlr_loss_timeout_sec": 0, 01:37:50.257 "delay_cmd_submit": true, 01:37:50.257 "dhchap_dhgroups": [ 01:37:50.257 "null", 01:37:50.257 "ffdhe2048", 01:37:50.257 "ffdhe3072", 01:37:50.257 "ffdhe4096", 01:37:50.257 "ffdhe6144", 01:37:50.257 "ffdhe8192" 01:37:50.257 ], 01:37:50.257 "dhchap_digests": [ 01:37:50.257 "sha256", 01:37:50.257 "sha384", 01:37:50.257 "sha512" 01:37:50.257 ], 01:37:50.257 "disable_auto_failback": false, 01:37:50.257 "fast_io_fail_timeout_sec": 0, 01:37:50.257 "generate_uuids": false, 01:37:50.257 "high_priority_weight": 0, 01:37:50.257 "io_path_stat": false, 01:37:50.257 "io_queue_requests": 512, 01:37:50.257 "keep_alive_timeout_ms": 10000, 01:37:50.257 "low_priority_weight": 0, 01:37:50.257 "medium_priority_weight": 0, 01:37:50.257 "nvme_adminq_poll_period_us": 10000, 01:37:50.257 "nvme_error_stat": false, 01:37:50.257 "nvme_ioq_poll_period_us": 0, 01:37:50.257 "rdma_cm_event_timeout_ms": 0, 01:37:50.257 "rdma_max_cq_size": 0, 01:37:50.257 "rdma_srq_size": 0, 01:37:50.257 "reconnect_delay_sec": 0, 01:37:50.257 "timeout_admin_us": 0, 01:37:50.257 "timeout_us": 0, 01:37:50.257 "transport_ack_timeout": 0, 01:37:50.257 "transport_retry_count": 4, 01:37:50.257 "transport_tos": 0 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_nvme_attach_controller", 01:37:50.257 "params": { 01:37:50.257 "adrfam": "IPv4", 01:37:50.257 "ctrlr_loss_timeout_sec": 0, 01:37:50.257 "ddgst": false, 01:37:50.257 
"fast_io_fail_timeout_sec": 0, 01:37:50.257 "hdgst": false, 01:37:50.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:37:50.257 "multipath": "multipath", 01:37:50.257 "name": "nvme0", 01:37:50.257 "prchk_guard": false, 01:37:50.257 "prchk_reftag": false, 01:37:50.257 "psk": "key0", 01:37:50.257 "reconnect_delay_sec": 0, 01:37:50.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:37:50.257 "traddr": "127.0.0.1", 01:37:50.257 "trsvcid": "4420", 01:37:50.257 "trtype": "TCP" 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_nvme_set_hotplug", 01:37:50.257 "params": { 01:37:50.257 "enable": false, 01:37:50.257 "period_us": 100000 01:37:50.257 } 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "method": "bdev_wait_for_examine" 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }, 01:37:50.257 { 01:37:50.257 "subsystem": "nbd", 01:37:50.257 "config": [] 01:37:50.257 } 01:37:50.257 ] 01:37:50.257 }' 01:37:50.257 [2024-12-09 05:32:41.673578] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:37:50.258 [2024-12-09 05:32:41.673639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110453 ] 01:37:50.258 [2024-12-09 05:32:41.809423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:50.258 [2024-12-09 05:32:41.839984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:50.519 [2024-12-09 05:32:41.983040] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:37:51.091 05:32:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:51.091 05:32:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:37:51.091 05:32:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:37:51.091 05:32:42 keyring_file -- keyring/file.sh@121 -- # jq length 01:37:51.091 05:32:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:51.352 05:32:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:37:51.352 05:32:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:37:51.352 05:32:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:37:51.352 05:32:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:51.352 05:32:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:37:51.352 05:32:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:51.352 05:32:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:51.614 05:32:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:37:51.614 05:32:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:37:51.614 05:32:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:37:51.614 05:32:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:37:51.614 05:32:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:51.614 05:32:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:51.614 05:32:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 
01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:37:51.875 05:32:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@1 -- # cleanup 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WXWlXt7Alq /tmp/tmp.pwGg6pA3Nq 01:37:51.875 05:32:43 keyring_file -- keyring/file.sh@20 -- # killprocess 110453 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110453 ']' 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110453 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@959 -- # uname 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110453 01:37:51.875 killing process with pid 110453 01:37:51.875 Received shutdown signal, test time was about 1.000000 seconds 01:37:51.875 01:37:51.875 Latency(us) 01:37:51.875 [2024-12-09T05:32:43.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:51.875 [2024-12-09T05:32:43.492Z] =================================================================================================================== 01:37:51.875 [2024-12-09T05:32:43.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110453' 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@973 -- # kill 110453 01:37:51.875 05:32:43 keyring_file -- common/autotest_common.sh@978 -- # wait 110453 01:37:52.134 05:32:43 keyring_file -- keyring/file.sh@21 -- # killprocess 109979 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109979 ']' 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109979 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@959 -- # uname 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109979 01:37:52.134 killing process with pid 109979 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109979' 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@973 -- # kill 109979 01:37:52.134 05:32:43 keyring_file -- common/autotest_common.sh@978 -- # wait 109979 01:37:52.392 ************************************ 01:37:52.392 END TEST keyring_file 01:37:52.392 ************************************ 01:37:52.392 01:37:52.392 real 0m12.837s 01:37:52.392 user 0m31.965s 01:37:52.392 sys 0m2.423s 01:37:52.392 05:32:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:52.392 05:32:43 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 01:37:52.392 05:32:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:37:52.392 05:32:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:37:52.393 05:32:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:37:52.393 05:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:37:52.393 05:32:43 -- common/autotest_common.sh@10 -- # set +x 01:37:52.393 ************************************ 01:37:52.393 START TEST keyring_linux 01:37:52.393 ************************************ 01:37:52.393 05:32:43 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:37:52.393 Joined session keyring: 197162866 01:37:52.393 * Looking for test storage... 01:37:52.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:37:52.393 05:32:43 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:37:52.393 05:32:43 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 01:37:52.393 05:32:43 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@345 -- # : 1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@368 -- # return 0 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:37:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:52.652 --rc genhtml_branch_coverage=1 01:37:52.652 --rc genhtml_function_coverage=1 01:37:52.652 --rc genhtml_legend=1 01:37:52.652 --rc geninfo_all_blocks=1 01:37:52.652 --rc geninfo_unexecuted_blocks=1 01:37:52.652 01:37:52.652 ' 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:37:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:52.652 --rc genhtml_branch_coverage=1 01:37:52.652 --rc genhtml_function_coverage=1 01:37:52.652 --rc genhtml_legend=1 01:37:52.652 --rc geninfo_all_blocks=1 01:37:52.652 --rc geninfo_unexecuted_blocks=1 01:37:52.652 01:37:52.652 ' 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:37:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:52.652 --rc genhtml_branch_coverage=1 01:37:52.652 --rc genhtml_function_coverage=1 01:37:52.652 --rc genhtml_legend=1 01:37:52.652 --rc geninfo_all_blocks=1 01:37:52.652 --rc geninfo_unexecuted_blocks=1 01:37:52.652 01:37:52.652 ' 01:37:52.652 05:32:44 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:37:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:37:52.652 --rc genhtml_branch_coverage=1 01:37:52.652 --rc genhtml_function_coverage=1 01:37:52.652 --rc genhtml_legend=1 01:37:52.652 --rc geninfo_all_blocks=1 01:37:52.652 --rc geninfo_unexecuted_blocks=1 01:37:52.652 01:37:52.652 ' 01:37:52.652 05:32:44 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:37:52.652 05:32:44 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:37:52.652 05:32:44 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=717d4fb6-9228-45ae-9435-1e6ea8cff784 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:37:52.652 05:32:44 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:37:52.652 05:32:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:37:52.652 05:32:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:52.653 05:32:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:52.653 05:32:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:52.653 05:32:44 keyring_linux -- paths/export.sh@5 -- # export PATH 01:37:52.653 05:32:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 
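Annotation: keyring_linux begins by sourcing test/nvmf/common.sh, which derives a per-run host identity from nvme-cli; the NVME_HOSTNQN/NVME_HOSTID values above come from that step. A minimal reconstruction of the derivation, assuming my reading of the trace is correct and nvme-cli is installed (values differ on every run):

  # Per-run host identity as seen in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:717d4fb6-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep only the UUID portion
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")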
01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:37:52.653 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@733 -- # python - 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:37:52.653 /tmp/:spdk-test:key0 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:37:52.653 05:32:44 keyring_linux -- nvmf/common.sh@733 -- # python - 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:37:52.653 /tmp/:spdk-test:key1 01:37:52.653 05:32:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110605 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110605 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110605 ']' 01:37:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:37:52.653 05:32:44 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:52.653 05:32:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:37:52.653 [2024-12-09 05:32:44.209293] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
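Annotation: prep_key turns the raw hex keys 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into NVMe TLS interchange strings (NVMeTLSkey-1:00:<base64 payload>:) and stores them in mode-0600 files under /tmp. The strings below are copied verbatim from the keyctl calls later in this trace; in the test they are generated by format_interchange_psk (an inline Python helper in nvmf/common.sh that base64-encodes the key plus a checksum), and treating the file contents as exactly these strings is my assumption.

  # Recreate the two PSK files used by keyring_linux (contents copied from this trace).
  printf '%s\n' 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/:spdk-test:key0
  printf '%s\n' 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' > /tmp/:spdk-test:key1
  chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1    # prep_key restricts the files to the owner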
01:37:52.653 [2024-12-09 05:32:44.209377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110605 ] 01:37:52.912 [2024-12-09 05:32:44.349539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:52.912 [2024-12-09 05:32:44.380247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:37:53.478 05:32:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:53.478 05:32:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:37:53.478 05:32:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:37:53.478 05:32:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:37:53.478 05:32:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:37:53.478 [2024-12-09 05:32:45.040649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:37:53.478 null0 01:37:53.478 [2024-12-09 05:32:45.072621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:37:53.478 [2024-12-09 05:32:45.072741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:37:53.478 05:32:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:37:53.478 05:32:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:37:53.736 424066178 01:37:53.736 05:32:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:37:53.736 99851947 01:37:53.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:37:53.736 05:32:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110641 01:37:53.736 05:32:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110641 /var/tmp/bperf.sock 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110641 ']' 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:37:53.736 05:32:45 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:37:53.736 05:32:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:37:53.736 [2024-12-09 05:32:45.140789] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
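Annotation: with the target listening on 127.0.0.1:4420 (TLS flagged experimental above), the test loads both interchange strings into the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1, then starts bdevperf with --wait-for-rpc so the Linux-keyring module can be enabled before framework initialization; the attach that follows refers to the key purely by its kernel keyring name. A hedged recreation of the sequence the trace shows next, again with a polling loop standing in for waitforlisten and the key data read from the files prepared earlier:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Load the PSKs into the kernel session keyring; keyctl prints each key's serial number.
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  # Start bdevperf paused so the keyring module can be enabled before init completes.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z --wait-for-rpc &
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  $RPC keyring_linux_set_options --enable    # allow PSKs to be resolved from the kernel keyring
  $RPC framework_start_init
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0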
01:37:53.736 [2024-12-09 05:32:45.140848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110641 ] 01:37:53.736 [2024-12-09 05:32:45.279026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:37:53.736 [2024-12-09 05:32:45.314686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:37:54.665 05:32:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:37:54.665 05:32:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:37:54.665 05:32:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:37:54.665 05:32:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:37:54.665 05:32:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:37:54.665 05:32:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:37:54.922 05:32:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:37:54.922 05:32:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:37:55.177 [2024-12-09 05:32:46.665222] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:37:55.177 nvme0n1 01:37:55.177 05:32:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:37:55.177 05:32:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:37:55.177 05:32:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:37:55.177 05:32:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:37:55.177 05:32:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:55.177 05:32:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:37:55.434 05:32:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:37:55.434 05:32:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:37:55.434 05:32:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:37:55.434 05:32:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:37:55.434 05:32:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:37:55.434 05:32:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:37:55.434 05:32:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@25 -- # sn=424066178 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 424066178 == \4\2\4\0\6\6\1\7\8 ]] 01:37:55.691 05:32:47 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 424066178 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:37:55.691 05:32:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:37:55.691 Running I/O for 1 seconds... 01:37:57.061 24049.00 IOPS, 93.94 MiB/s 01:37:57.061 Latency(us) 01:37:57.061 [2024-12-09T05:32:48.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:57.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:37:57.061 nvme0n1 : 1.01 24049.34 93.94 0.00 0.00 5306.52 2218.14 6805.66 01:37:57.061 [2024-12-09T05:32:48.678Z] =================================================================================================================== 01:37:57.061 [2024-12-09T05:32:48.678Z] Total : 24049.34 93.94 0.00 0.00 5306.52 2218.14 6805.66 01:37:57.061 { 01:37:57.061 "results": [ 01:37:57.061 { 01:37:57.061 "job": "nvme0n1", 01:37:57.061 "core_mask": "0x2", 01:37:57.061 "workload": "randread", 01:37:57.061 "status": "finished", 01:37:57.061 "queue_depth": 128, 01:37:57.061 "io_size": 4096, 01:37:57.061 "runtime": 1.00535, 01:37:57.061 "iops": 24049.33605212115, 01:37:57.061 "mibps": 93.94271895359824, 01:37:57.061 "io_failed": 0, 01:37:57.061 "io_timeout": 0, 01:37:57.061 "avg_latency_us": 5306.522202383603, 01:37:57.061 "min_latency_us": 2218.1415384615384, 01:37:57.061 "max_latency_us": 6805.661538461539 01:37:57.061 } 01:37:57.061 ], 01:37:57.061 "core_count": 1 01:37:57.061 } 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:37:57.061 05:32:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:37:57.061 05:32:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:37:57.061 05:32:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:37:57.318 05:32:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:37:57.318 05:32:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:37:57.318 05:32:48 keyring_linux -- keyring/linux.sh@23 -- # return 01:37:57.318 05:32:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:37:57.318 05:32:48 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:37:57.318 05:32:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:37:57.318 05:32:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:37:57.318 [2024-12-09 05:32:48.903744] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:37:57.318 [2024-12-09 05:32:48.903906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f324b0 (107): Transport endpoint is not connected 01:37:57.318 [2024-12-09 05:32:48.904898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f324b0 (9): Bad file descriptor 01:37:57.318 [2024-12-09 05:32:48.905896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:37:57.318 [2024-12-09 05:32:48.905915] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:37:57.318 [2024-12-09 05:32:48.905922] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:37:57.318 [2024-12-09 05:32:48.905929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
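Annotation: the attach attempt just above is wrapped in the NOT helper from common/autotest_common.sh and is expected to fail, presumably because :spdk-test:key1 does not match the PSK the target side was configured with; the nvme_tcp/nvme_ctrlr errors and the JSON-RPC Input/output error that follow are the anticipated outcome. A simplified stand-in for that expected-failure idiom (the real helper, whose es bookkeeping is visible in the trace, also special-cases exit codes above 128 from signals):

  # Minimal version of the NOT() idiom used by the test (hypothetical simplification).
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))          # succeed only when the wrapped command fails
  }

  # Wrong PSK: this attach must be rejected by the target.
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1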
01:37:57.318 2024/12/09 05:32:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:37:57.318 request: 01:37:57.318 { 01:37:57.318 "method": "bdev_nvme_attach_controller", 01:37:57.318 "params": { 01:37:57.318 "name": "nvme0", 01:37:57.318 "trtype": "tcp", 01:37:57.318 "traddr": "127.0.0.1", 01:37:57.318 "adrfam": "ipv4", 01:37:57.318 "trsvcid": "4420", 01:37:57.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:37:57.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:37:57.318 "prchk_reftag": false, 01:37:57.319 "prchk_guard": false, 01:37:57.319 "hdgst": false, 01:37:57.319 "ddgst": false, 01:37:57.319 "psk": ":spdk-test:key1", 01:37:57.319 "allow_unrecognized_csi": false 01:37:57.319 } 01:37:57.319 } 01:37:57.319 Got JSON-RPC error response 01:37:57.319 GoRPCClient: error on JSON-RPC call 01:37:57.319 05:32:48 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:37:57.319 05:32:48 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:37:57.319 05:32:48 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:37:57.319 05:32:48 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@33 -- # sn=424066178 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 424066178 01:37:57.319 1 links removed 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:37:57.319 05:32:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:37:57.575 05:32:48 keyring_linux -- keyring/linux.sh@33 -- # sn=99851947 01:37:57.575 05:32:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 99851947 01:37:57.575 1 links removed 01:37:57.575 05:32:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110641 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110641 ']' 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110641 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110641 01:37:57.575 05:32:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:37:57.575 killing 
process with pid 110641 01:37:57.575 Received shutdown signal, test time was about 1.000000 seconds 01:37:57.575 01:37:57.575 Latency(us) 01:37:57.575 [2024-12-09T05:32:49.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:37:57.575 [2024-12-09T05:32:49.193Z] =================================================================================================================== 01:37:57.576 [2024-12-09T05:32:49.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:37:57.576 05:32:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:37:57.576 05:32:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110641' 01:37:57.576 05:32:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 110641 01:37:57.576 05:32:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 110641 01:37:57.576 05:32:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110605 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110605 ']' 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110605 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110605 01:37:57.576 killing process with pid 110605 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110605' 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 110605 01:37:57.576 05:32:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 110605 01:37:57.953 01:37:57.953 real 0m5.446s 01:37:57.953 user 0m10.520s 01:37:57.953 sys 0m1.170s 01:37:57.953 05:32:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:37:57.953 ************************************ 01:37:57.953 END TEST keyring_linux 01:37:57.953 ************************************ 01:37:57.953 05:32:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:37:57.953 05:32:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:37:57.953 05:32:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:37:57.953 05:32:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:37:57.953 05:32:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:37:57.953 05:32:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:37:57.953 05:32:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:37:57.953 05:32:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:37:57.953 05:32:49 -- common/autotest_common.sh@726 -- 
# xtrace_disable 01:37:57.953 05:32:49 -- common/autotest_common.sh@10 -- # set +x 01:37:57.953 05:32:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:37:57.953 05:32:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:37:57.953 05:32:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:37:57.953 05:32:49 -- common/autotest_common.sh@10 -- # set +x 01:37:59.332 INFO: APP EXITING 01:37:59.332 INFO: killing all VMs 01:37:59.332 INFO: killing vhost app 01:37:59.332 INFO: EXIT DONE 01:37:59.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:37:59.855 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:37:59.855 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:38:00.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:38:00.430 Cleaning 01:38:00.430 Removing: /var/run/dpdk/spdk0/config 01:38:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:38:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:38:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:38:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:38:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:38:00.430 Removing: /var/run/dpdk/spdk0/hugepage_info 01:38:00.430 Removing: /var/run/dpdk/spdk1/config 01:38:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:38:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:38:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:38:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:38:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:38:00.430 Removing: /var/run/dpdk/spdk1/hugepage_info 01:38:00.430 Removing: /var/run/dpdk/spdk2/config 01:38:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:38:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:38:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:38:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:38:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:38:00.430 Removing: /var/run/dpdk/spdk2/hugepage_info 01:38:00.430 Removing: /var/run/dpdk/spdk3/config 01:38:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:38:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:38:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:38:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:38:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:38:00.430 Removing: /var/run/dpdk/spdk3/hugepage_info 01:38:00.430 Removing: /var/run/dpdk/spdk4/config 01:38:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:38:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:38:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:38:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:38:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:38:00.430 Removing: /var/run/dpdk/spdk4/hugepage_info 01:38:00.430 Removing: /dev/shm/nvmf_trace.0 01:38:00.430 Removing: /dev/shm/spdk_tgt_trace.pid57751 01:38:00.430 Removing: /var/run/dpdk/spdk0 01:38:00.430 Removing: /var/run/dpdk/spdk1 01:38:00.430 Removing: /var/run/dpdk/spdk2 01:38:00.430 Removing: /var/run/dpdk/spdk3 01:38:00.430 Removing: /var/run/dpdk/spdk4 01:38:00.430 Removing: /var/run/dpdk/spdk_pid100323 01:38:00.430 Removing: /var/run/dpdk/spdk_pid100372 01:38:00.430 Removing: 
/var/run/dpdk/spdk_pid100727 01:38:00.430 Removing: /var/run/dpdk/spdk_pid100777 01:38:00.430 Removing: /var/run/dpdk/spdk_pid101173 01:38:00.430 Removing: /var/run/dpdk/spdk_pid101740 01:38:00.430 Removing: /var/run/dpdk/spdk_pid102169 01:38:00.430 Removing: /var/run/dpdk/spdk_pid103214 01:38:00.430 Removing: /var/run/dpdk/spdk_pid104271 01:38:00.430 Removing: /var/run/dpdk/spdk_pid104389 01:38:00.430 Removing: /var/run/dpdk/spdk_pid104453 01:38:00.430 Removing: /var/run/dpdk/spdk_pid106050 01:38:00.430 Removing: /var/run/dpdk/spdk_pid106372 01:38:00.430 Removing: /var/run/dpdk/spdk_pid106717 01:38:00.430 Removing: /var/run/dpdk/spdk_pid107280 01:38:00.430 Removing: /var/run/dpdk/spdk_pid107285 01:38:00.430 Removing: /var/run/dpdk/spdk_pid107699 01:38:00.430 Removing: /var/run/dpdk/spdk_pid107859 01:38:00.430 Removing: /var/run/dpdk/spdk_pid108021 01:38:00.430 Removing: /var/run/dpdk/spdk_pid108118 01:38:00.430 Removing: /var/run/dpdk/spdk_pid108272 01:38:00.431 Removing: /var/run/dpdk/spdk_pid108381 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109107 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109142 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109183 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109432 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109473 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109508 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109979 01:38:00.431 Removing: /var/run/dpdk/spdk_pid109996 01:38:00.431 Removing: /var/run/dpdk/spdk_pid110453 01:38:00.431 Removing: /var/run/dpdk/spdk_pid110605 01:38:00.431 Removing: /var/run/dpdk/spdk_pid110641 01:38:00.431 Removing: /var/run/dpdk/spdk_pid57603 01:38:00.431 Removing: /var/run/dpdk/spdk_pid57751 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58014 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58101 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58135 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58245 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58275 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58403 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58677 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58850 01:38:00.431 Removing: /var/run/dpdk/spdk_pid58940 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59035 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59132 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59165 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59201 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59270 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59371 01:38:00.431 Removing: /var/run/dpdk/spdk_pid59982 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60040 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60104 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60132 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60201 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60223 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60292 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60307 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60353 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60383 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60429 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60459 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60609 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60639 01:38:00.431 Removing: /var/run/dpdk/spdk_pid60721 01:38:00.431 Removing: /var/run/dpdk/spdk_pid61197 01:38:00.431 Removing: /var/run/dpdk/spdk_pid61587 01:38:00.431 Removing: /var/run/dpdk/spdk_pid64465 01:38:00.431 Removing: /var/run/dpdk/spdk_pid64511 01:38:00.431 Removing: /var/run/dpdk/spdk_pid64868 01:38:00.431 Removing: /var/run/dpdk/spdk_pid64917 01:38:00.431 Removing: 
01:38:00.431 Removing: /var/run/dpdk/spdk_pid65319
01:38:00.431 Removing: /var/run/dpdk/spdk_pid65888
01:38:00.431 Removing: /var/run/dpdk/spdk_pid66305
01:38:00.431 Removing: /var/run/dpdk/spdk_pid67361
01:38:00.431 Removing: /var/run/dpdk/spdk_pid68441
01:38:00.431 Removing: /var/run/dpdk/spdk_pid68564
01:38:00.431 Removing: /var/run/dpdk/spdk_pid68627
01:38:00.693 Removing: /var/run/dpdk/spdk_pid70227
01:38:00.693 Removing: /var/run/dpdk/spdk_pid70574
01:38:00.693 Removing: /var/run/dpdk/spdk_pid74439
01:38:00.693 Removing: /var/run/dpdk/spdk_pid74880
01:38:00.693 Removing: /var/run/dpdk/spdk_pid75480
01:38:00.693 Removing: /var/run/dpdk/spdk_pid76007
01:38:00.693 Removing: /var/run/dpdk/spdk_pid81413
01:38:00.693 Removing: /var/run/dpdk/spdk_pid81901
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82015
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82173
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82220
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82273
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82325
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82492
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82642
01:38:00.693 Removing: /var/run/dpdk/spdk_pid82929
01:38:00.693 Removing: /var/run/dpdk/spdk_pid83052
01:38:00.693 Removing: /var/run/dpdk/spdk_pid83302
01:38:00.693 Removing: /var/run/dpdk/spdk_pid83426
01:38:00.693 Removing: /var/run/dpdk/spdk_pid83551
01:38:00.693 Removing: /var/run/dpdk/spdk_pid83937
01:38:00.693 Removing: /var/run/dpdk/spdk_pid84386
01:38:00.693 Removing: /var/run/dpdk/spdk_pid84387
01:38:00.693 Removing: /var/run/dpdk/spdk_pid84388
01:38:00.693 Removing: /var/run/dpdk/spdk_pid84670
01:38:00.693 Removing: /var/run/dpdk/spdk_pid84952
01:38:00.693 Removing: /var/run/dpdk/spdk_pid85368
01:38:00.693 Removing: /var/run/dpdk/spdk_pid85729
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86333
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86345
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86731
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86751
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86765
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86800
01:38:00.693 Removing: /var/run/dpdk/spdk_pid86806
01:38:00.693 Removing: /var/run/dpdk/spdk_pid87218
01:38:00.693 Removing: /var/run/dpdk/spdk_pid87266
01:38:00.693 Removing: /var/run/dpdk/spdk_pid87648
01:38:00.693 Removing: /var/run/dpdk/spdk_pid87900
01:38:00.693 Removing: /var/run/dpdk/spdk_pid88440
01:38:00.693 Removing: /var/run/dpdk/spdk_pid89080
01:38:00.693 Removing: /var/run/dpdk/spdk_pid90444
01:38:00.693 Removing: /var/run/dpdk/spdk_pid91093
01:38:00.693 Removing: /var/run/dpdk/spdk_pid91100
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93215
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93302
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93381
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93472
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93625
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93715
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93785
01:38:00.693 Removing: /var/run/dpdk/spdk_pid93856
01:38:00.693 Removing: /var/run/dpdk/spdk_pid94246
01:38:00.693 Removing: /var/run/dpdk/spdk_pid95012
01:38:00.693 Removing: /var/run/dpdk/spdk_pid96439
01:38:00.693 Removing: /var/run/dpdk/spdk_pid96645
01:38:00.693 Removing: /var/run/dpdk/spdk_pid96935
01:38:00.693 Removing: /var/run/dpdk/spdk_pid97481
01:38:00.693 Removing: /var/run/dpdk/spdk_pid97872
01:38:00.693 Clean
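
Everything between "Cleaning" and "Clean" is autotest_cleanup deleting per-process DPDK state left under /var/run/dpdk (the config, fbarray_memseg*, fbarray_memzone and hugepage_info files plus the spdk0-spdk4 directories themselves), the SPDK trace shared-memory files under /dev/shm, and the stale per-PID entries (spdk_pid*) accumulated by individual test applications. A rough manual equivalent is sketched below; it is not the autotest_cleanup implementation, it assumes the default paths seen in this log, and it assumes no SPDK or DPDK process is still running.

  #!/usr/bin/env bash
  # Hedged sketch of the cleanup logged above.
  set -euo pipefail
  # Per-process DPDK runtime directories (file prefixes spdk0..spdk4).
  rm -rf /var/run/dpdk/spdk[0-9]*
  # Stale per-PID runtime entries left behind by individual test apps.
  rm -rf /var/run/dpdk/spdk_pid*
  # SPDK trace shared-memory files for the NVMe-oF target and spdk_tgt.
  rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*
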
01:38:00.693 05:32:52 -- common/autotest_common.sh@1453 -- # return 0
01:38:00.693 05:32:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:38:00.693 05:32:52 -- common/autotest_common.sh@732 -- # xtrace_disable
01:38:00.693 05:32:52 -- common/autotest_common.sh@10 -- # set +x
01:38:00.693 05:32:52 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:38:00.693 05:32:52 -- common/autotest_common.sh@732 -- # xtrace_disable
01:38:00.693 05:32:52 -- common/autotest_common.sh@10 -- # set +x
01:38:00.693 05:32:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:38:00.693 05:32:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
01:38:00.693 05:32:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
01:38:00.693 05:32:52 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:38:00.693 05:32:52 -- spdk/autotest.sh@398 -- # hostname
01:38:00.693 05:32:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
01:38:00.952 geninfo: WARNING: invalid characters removed from testname!
01:38:27.490 05:33:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:38:27.490 05:33:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:38:29.388 05:33:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:38:31.290 05:33:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:38:33.820 05:33:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
01:38:35.720 05:33:27 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
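
The lcov sequence above is the coverage post-processing: capture the counters accumulated during the test run (-c over the spdk tree, tagged via -t with the VM image name), merge the result with the pre-test baseline into cov_total.info (-a), then repeatedly strip paths that should not count toward SPDK coverage (-r): the bundled dpdk sources, system headers under /usr, and a few example and app directories. The condensed sketch below reproduces that flow; the ./coverage output directory is illustrative, and the final genhtml step is an assumption, since report generation is not part of this excerpt.

  #!/usr/bin/env bash
  # Condensed version of the coverage post-processing shown in the log above.
  set -euo pipefail
  out=./coverage   # illustrative; the job writes to spdk/../output instead
  rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)
  # Capture coverage accumulated while the tests ran.
  lcov "${rc[@]}" -q -c --no-external -d ./spdk -t "$(hostname)" -o "$out/cov_test.info"
  # Merge with the pre-test baseline, then drop third-party and system code.
  lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done
  # Assumption: render an HTML report from the filtered trace.
  genhtml -q "$out/cov_total.info" -o "$out/html"
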
01:38:37.663 05:33:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
01:38:37.663 05:33:28 -- spdk/autorun.sh@1 -- $ timing_finish
01:38:37.663 05:33:28 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
01:38:37.663 05:33:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:38:37.663 05:33:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:38:37.663 05:33:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
01:38:37.663 + [[ -n 4985 ]]
01:38:37.663 + sudo kill 4985
01:38:37.670 [Pipeline] }
01:38:37.685 [Pipeline] // timeout
01:38:37.690 [Pipeline] }
01:38:37.703 [Pipeline] // stage
01:38:37.708 [Pipeline] }
01:38:37.721 [Pipeline] // catchError
01:38:37.729 [Pipeline] stage
01:38:37.731 [Pipeline] { (Stop VM)
01:38:37.744 [Pipeline] sh
01:38:38.022 + vagrant halt
01:38:40.548 ==> default: Halting domain...
01:38:43.833 [Pipeline] sh
01:38:44.108 + vagrant destroy -f
01:38:46.636 ==> default: Removing domain...
01:38:46.905 [Pipeline] sh
01:38:47.183 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
01:38:47.208 [Pipeline] }
01:38:47.223 [Pipeline] // stage
01:38:47.228 [Pipeline] }
01:38:47.244 [Pipeline] // dir
01:38:47.249 [Pipeline] }
01:38:47.265 [Pipeline] // wrap
01:38:47.271 [Pipeline] }
01:38:47.283 [Pipeline] // catchError
01:38:47.292 [Pipeline] stage
01:38:47.294 [Pipeline] { (Epilogue)
01:38:47.306 [Pipeline] sh
01:38:47.607 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:38:54.169 [Pipeline] catchError
01:38:54.171 [Pipeline] {
01:38:54.182 [Pipeline] sh
01:38:54.468 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:38:54.468 Artifacts sizes are good
01:38:54.478 [Pipeline] }
01:38:54.491 [Pipeline] // catchError
01:38:54.502 [Pipeline] archiveArtifacts
01:38:54.509 Archiving artifacts
01:38:54.606 [Pipeline] cleanWs
01:38:54.618 [WS-CLEANUP] Deleting project workspace...
01:38:54.618 [WS-CLEANUP] Deferred wipeout is used...
01:38:54.625 [WS-CLEANUP] done
01:38:54.627 [Pipeline] }
01:38:54.641 [Pipeline] // stage
01:38:54.646 [Pipeline] }
01:38:54.659 [Pipeline] // node
01:38:54.664 [Pipeline] End of Pipeline
01:38:54.697 Finished: SUCCESS
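
For reference, the timing_finish step above renders the per-step timing log with Brendan Gregg's FlameGraph, and the same graph can be regenerated offline from an archived timing.txt. flamegraph.pl writes SVG to stdout, so redirect it to a file; the output filename below is illustrative.

  #!/usr/bin/env bash
  # Rebuild the "Build Timing" flame graph from an archived timing.txt.
  # Assumes FlameGraph is available at /usr/local/FlameGraph, as in the log.
  /usr/local/FlameGraph/flamegraph.pl \
      --title 'Build Timing' --nametype Step: --countname seconds \
      timing.txt > build-timing.svg
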